[GENERAL] Why can't the database owner create schemas and how can I enable that?

2017-09-22 Thread Tim Uckun
I am setting up a database for an app. So I create a user without superuser but with login privileges. I then create a database and set its owner to that user like this... dropdb --if-exists api_development dropuser --if-exists api_user createuser api_user -P -d createdb api_development -O
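For later readers: the owner of a database normally has CREATE on it already, which is what schema creation checks. A sketch of the setup described, with an explicit grant in case that privilege was revoked (role and database names follow the post; the password is a placeholder):

```sql
-- Sketch only; 'secret' is a placeholder, not from the post.
CREATE ROLE api_user LOGIN PASSWORD 'secret' CREATEDB;
CREATE DATABASE api_development OWNER api_user;
-- The owner has CREATE on its own database by default; re-grant if revoked:
GRANT CREATE ON DATABASE api_development TO api_user;
```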

[GENERAL] Immutable functions and cache invalidation.

2017-08-26 Thread Tim Uckun
Say I have a function like this. CREATE OR REPLACE FUNCTION some_constant( ) RETURNS text LANGUAGE 'plpgsql' COST 100 IMMUTABLE ROWS 0 AS $BODY$ begin return 'some_string'; end; $BODY$; Then I have another function that calls it but is also immutable CREATE OR REPLACE
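The snippet is cut off; a syntactically valid version of the first function might look like the following (ROWS applies only to set-returning functions, so it is dropped here). Because the function is IMMUTABLE, the planner may fold calls to it into constants at plan time, which is the cache-invalidation hazard the subject line alludes to: redefining the function later does not update anything already folded into a cached plan.

```sql
CREATE OR REPLACE FUNCTION some_constant()
RETURNS text
LANGUAGE plpgsql
IMMUTABLE
AS $BODY$
BEGIN
  RETURN 'some_string';
END;
$BODY$;
```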

Re: [GENERAL] Shared Constants in PLPGSQL

2017-08-01 Thread Tim Uckun
cycles. On Wed, Aug 2, 2017 at 1:04 AM, Merlin Moncure <mmonc...@gmail.com> wrote: > On Tue, Aug 1, 2017 at 4:56 AM, Tim Uckun <timuc...@gmail.com> wrote: > > What's the best way to deal with global constants in PLPGSQL. Currently > I am > > putting them in

[GENERAL] Shared Constants in PLPGSQL

2017-08-01 Thread Tim Uckun
What's the best way to deal with global constants in PLPGSQL. Currently I am putting them in a function with out parameters and then calling that function from every other function that needs them like this. CREATE OR REPLACE FUNCTION hashids.constants( OUT min_alphabet_length integer,
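The pattern described, completed as a sketch (only min_alphabet_length is from the post; the second OUT parameter and its values are illustrative):

```sql
-- All shared constants come back from one function via OUT parameters.
CREATE OR REPLACE FUNCTION hashids.constants(
  OUT min_alphabet_length integer,
  OUT default_alphabet text)
LANGUAGE sql IMMUTABLE
AS $$
  SELECT 16, 'abcdefghijklmnopqrstuvwxyz';
$$;

-- A caller picks out the field it needs:
SELECT (hashids.constants()).min_alphabet_length;
```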

Re: [GENERAL] Developer GUI tools for PostgreSQL

2017-07-28 Thread Tim Uckun
I don't run windows so I haven't looked at the EMS product. On Fri, Jul 28, 2017 at 6:53 PM, vinny <vi...@xs4all.nl> wrote: > On 2017-07-28 06:31, Tim Uckun wrote: > >> I think it's funny that after all these years pgadmin3 is still the >> most comprehensive GUI for p

Re: [GENERAL] Developer GUI tools for PostgreSQL

2017-07-27 Thread Tim Uckun
I think it's funny that after all these years pgadmin3 is still the most comprehensive GUI for postgres. Even though it's prone to crashing on my machine and I have paid for datagrip I still reach for it first. On Fri, Jul 28, 2017 at 2:46 AM, vinny wrote: > On 2017-07-27

Re: [GENERAL] Developer GUI tools for PostgreSQL

2017-07-26 Thread Tim Uckun
> You might check this page out as well: > https://wiki.postgresql.org/wiki/Community_Guide_to_PostgreSQL_GUI_Tools > > <https://wiki.postgresql.org/wiki/Community_Guide_to_PostgreSQL_GUI_Tools> > Cheers, > > Brent Wood > -- > *From:

Re: [GENERAL] Developer GUI tools for PostgreSQL

2017-07-26 Thread Tim Uckun
I still use pgadmin3. I also have a subscription to the jetbrains tools so I also give datagrip a try once in a while. Datagrip has a lot going for it but it's lagging behind pgadmin3 in a lot of ways so it hasn't replaced it for me. On Thu, Jul 27, 2017 at 10:41 AM, Tiffany Thang

Re: [GENERAL] Dealing with ordered hierarchies

2017-07-24 Thread Tim Uckun
users are trying to update the hierarchy either by inserts or updates? I can definitely see a situation where transactions trip over each other. On Mon, Jul 24, 2017 at 10:32 PM, Alban Hertroys <haram...@gmail.com> wrote: > > > On 24 Jul 2017, at 9:02, Tim Uckun <

[GENERAL] Dealing with ordered hierarchies

2017-07-24 Thread Tim Uckun
I have read many articles about dealing with hierarchies in postgres including nested sets, ltree, materialized paths, using arrays as parentage, CTEs etc but nobody talks about the following scenario. Say I have a hierarchy like this 1 1.1 1.1.1 1.1.2 1.2 1.3 2 2.1 In this hierarchy the order
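One way to model the ordered hierarchy sketched in the post (an illustration, not from the thread) is a materialized path of sibling positions stored as an integer array, which sorts depth-first in sibling order for free:

```sql
CREATE TABLE nodes (
  id   serial PRIMARY KEY,
  path integer[] NOT NULL UNIQUE   -- e.g. {1,1,2} for node 1.1.2
);

-- Array comparison is element-wise, so this yields a depth-first
-- traversal that preserves sibling order:
SELECT array_to_string(path, '.') FROM nodes ORDER BY path;
```

Inserting a node between two siblings still means renumbering the later siblings' paths, which is exactly the concurrent-update concern raised in the follow-up.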

Re: [GENERAL] Is PL-PGSQL interpreted or complied?

2017-07-06 Thread Tim Uckun
Interesting, thank you. I was curious to know how it worked. Cheers.

[GENERAL] Is PL-PGSQL interpreted or complied?

2017-07-05 Thread Tim Uckun
I am curious about the stored proc languages inside of postgres. When I write a stored proc is it compiled to some internal representation or just interpreted? How does this work with other languages? Also would it be possible to extract PL-PGSQL into a standalone (albeit crippled) language? Is

Re: [GENERAL] "Reverse" inheritance?

2017-04-04 Thread Tim Uckun
ies like listing all > known unix variants; if that is hidden in the table names > then you'll have to issue DDL queries to do the work of SELECT queries, > which just sounds wrong to me. > > I'd go for a tree, possibly using recursive CTE's to dig it. > > > On 2017-0

Re: [GENERAL] "Reverse" inheritance?

2017-04-03 Thread Tim Uckun
I have thought of doing something like a single table inheritance and it could be done but I thought this might be a little more elegant. On Tue, Apr 4, 2017 at 2:15 PM, David G. Johnston < david.g.johns...@gmail.com> wrote: > On Mon, Apr 3, 2017 at 7:07 PM, Tim Uckun <timuc...@gmai

[GENERAL] "Reverse" inheritance?

2017-04-03 Thread Tim Uckun
I am trying to make postgres tables work like an object hierarchy. As an example I have done this. drop table if exists os.linux cascade; create table os.linux ( script_name character varying(255) NOT NULL, script_body text, CONSTRAINT os_linux_pkey PRIMARY KEY (script_name) ); drop table if

Re: [GENERAL] Performance PLV8 vs PLPGSQL

2016-12-31 Thread Tim Uckun
he best git GUI available. The devs made it an > absolute joy to use, and I never need to use the command line anymore. > > Hope this helps! > > On Dec 29, 2016, at 10:43 PM, Tim Uckun <timuc...@gmail.com> wrote: > > I have datagrip and it's OK but it doesn't really do everyt

Re: [GENERAL] Performance PLV8 vs PLPGSQL

2016-12-29 Thread Tim Uckun
> So here is my hat tip to the Postgres team for an awesome job they are > doing! > > On Dec 29, 2016, at 7:19 PM, Tim Uckun <timuc...@gmail.com> wrote: > > I am not saying the postgres core people should work on an IDE, just that > an IDE like thing would be nice. > > On Fri,

Re: [GENERAL] Performance PLV8 vs PLPGSQL

2016-12-29 Thread Tim Uckun
rk on an IDE. Focus on reading and writing faster and > faster ACID all the while. > > On Dec 29, 2016, at 5:32 PM, Tim Uckun <timuc...@gmail.com> wrote: > > Honestly I don't even like JS. Having said that I am not too crazy about > PL-PGSQL either. I am

Re: [GENERAL] Performance PLV8 vs PLPGSQL

2016-12-29 Thread Tim Uckun
; m...@webthatworks.it> wrote: > On 12/29/2016 10:35 AM, Pavel Stehule wrote: > > 2016-12-29 10:03 GMT+01:00 Tim Uckun <timuc...@gmail.com >> <mailto:timuc...@gmail.com>>: >> >> I think it's awesome that postgres allows you to code in different >

Re: [GENERAL] Performance PLV8 vs PLPGSQL

2016-12-29 Thread Tim Uckun
Mostly generating SQL statements to execute. Like for example deciding which partition to insert into. On Thu, Dec 29, 2016 at 10:00 PM, Pavel Stehule <pavel.steh...@gmail.com> wrote: > > > 2016-12-29 9:23 GMT+01:00 Tim Uckun <timuc...@gmail.com>: > >> I am not d

Re: [GENERAL] Performance PLV8 vs PLPGSQL

2016-12-29 Thread Tim Uckun
I think it's awesome that postgres allows you to code in different languages like this. It really is a unique development environment and one that is overlooked as a development platform. It would be nice if more languages were delivered in the default package, especially Lua, V8 and mruby. On

Re: [GENERAL] Performance PLV8 vs PLPGSQL

2016-12-29 Thread Tim Uckun
AM, Mike Sofen <mso...@runbox.com> wrote: > *From:* Tim Uckun > I have seen various links on the internet which indicate that PLV8 is > significantly faster than PL-PGSQL sometimes an order of magnitude faster. > > > > Is there any benefit to choosing PL-PGSQL? > &g

[GENERAL] Performance PLV8 vs PLPGSQL

2016-12-28 Thread Tim Uckun
I have seen various links on the internet which indicate that PLV8 is significantly faster than PL-PGSQL sometimes an order of magnitude faster. Is this uniformly true or is it just in certain circumstances? Is there any benefit to choosing PL-PGSQL? Is there work going on to make PL-PGSQL more

Re: [GENERAL] Is there a way to fix this ugliness

2016-09-09 Thread Tim Uckun
I could not make coalesce work in the top-level select even though people on the internet say it should. I'll do the sub-select; that seems like it would work fine. On Sat, Sep 10, 2016 at 3:22 AM, Karl Czajkowski <kar...@isi.edu> wrote: > On Sep 10, Tim Uckun modulated: > &

[GENERAL] Is there a way to fix this ugliness

2016-09-09 Thread Tim Uckun
I am trying to get the child elements of a one to many table to be rolled up into a json field in the parent table. The query I am running is select ob.id ,case when array_position(array_agg(im.image_type), null) = 1 then '[]' else json_agg(row_to_json(im.*)) end as images
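The sub-select suggested later in the thread avoids the array_position trick entirely; a sketch, with table and column names inferred from the snippet:

```sql
-- json_agg over zero rows in a scalar subquery yields NULL, so COALESCE
-- can substitute an empty array for parents with no children.
SELECT ob.id,
       COALESCE(
         (SELECT json_agg(row_to_json(im.*))
          FROM images im
          WHERE im.parent_id = ob.id),
         '[]'::json) AS images
FROM objects ob;
```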

Re: [GENERAL] IDE for function/stored proc development.

2016-09-03 Thread Tim Uckun
gres. On Sat, Sep 3, 2016 at 11:03 PM, Pavel Stehule <pavel.steh...@gmail.com> wrote: > Hi > > 2016-09-03 11:36 GMT+02:00 Tim Uckun <timuc...@gmail.com>: > >> Does anybody use an IDE for doing heavy duty stored proc development? >> PGadmin is decent but I am lo

[GENERAL] IDE for function/stored proc development.

2016-09-03 Thread Tim Uckun
Does anybody use an IDE for doing heavy duty stored proc development? PGadmin is decent but I am looking for something better. I have tried jetbrains with the db browser plugin and on the surface it seems like a good choice but it's really buggy when working with procs. I also tried datagrip by

Re: [GENERAL] using a postgres table as a multi-writer multi-updater queue

2015-11-25 Thread Tim Uckun
I'll add my two cents. I set up something similar a while ago. Here are my suggestions for what they are worth. You don't ever want to delete from such a table so you need to set up something which allows you to truncate the tables when you no longer need them. One way to accomplish this is

Re: [GENERAL] Importing directly from BCP files

2015-11-16 Thread Tim Uckun
I don't see any documentation anywhere about the BCP format and as I said googling for "BCP format" gives a zillion links about the format files. Every project on github just calls out the BCP command but I am on linux and that doesn't help me at all. Bummer. This is going to be a huge pain to

Re: [GENERAL] Importing directly from BCP files

2015-11-16 Thread Tim Uckun
> > > On openSuSE 13.2 > > sudo zypper install freetds-tools > > Ubuntu 14.04 > > sudo apt-get install freetds-bin > > aklaver@killi:~> freebcp -h > usage: freebcp [[database_name.]owner.]table_name {in | out} datafile > [-m maxerrors] [-f formatfile] [-e errfile] > [-F firstrow]

Re: [GENERAL] Importing directly from BCP files

2015-11-16 Thread Tim Uckun
> > the binary form of the BCP file output is undocumented. So if all the BCP > files you have are the binary(native) version you are up this creek without > a paddle. > > Ugh. Yes it looks like that's the creek I am on. Thanks Microsoft! > So, moving to another creek. It depends on the

Re: [GENERAL] Importing directly from BCP files

2015-11-16 Thread Tim Uckun
I am going to ask them that but there are tons of legacy files which might need to be dealt with again in the future so I was hoping to use them directly.

[GENERAL] Importing directly from BCP files

2015-11-15 Thread Tim Uckun
Does anybody have any documentation on what a BCP file from SQL server looks like? I have a directory full of BCP files and I would like to import them into PG. Has anybody done anything like this before? Thanks

Re: [GENERAL] Importing directly from BCP files

2015-11-15 Thread Tim Uckun
> > > That appears to depend on the -f option > > > I have the format files so that's good. > > Have you tried opening a file in a text editor to see what they look like? > > Yes. It looks like a binary file with lots of non printable

Re: [GENERAL] Importing directly from BCP files

2015-11-15 Thread Tim Uckun
> > > I moved a database from MS Sql Server 2000 to Postgresql a few years > ago via BCP files. I used a Python script to do some fixup on the > BCP files to make them importable as CSV files into Postgresql. I > don't know if quirks I ran into are still an issue with newer versions > of Sql

Re: [GENERAL] Help with slow table update

2015-04-19 Thread Tim Uckun
On Sat, Apr 18, 2015 at 10:24 AM, Pawel Veselov pawel.vese...@gmail.com wrote: > I found some dangling prepared transactions How do you find and remove these?
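For the archives: dangling prepared (two-phase) transactions can be listed and removed like this:

```sql
-- List prepared transactions left behind by a two-phase commit:
SELECT gid, prepared, owner, database FROM pg_prepared_xacts;
-- Remove one by its global identifier:
ROLLBACK PREPARED 'some_gid';
```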

Re: [GENERAL] Benchmarking partitioning triggers and rules

2015-04-08 Thread Tim Uckun
So is there a third and even faster way of doing this? On Wed, Apr 8, 2015 at 4:08 PM, Adrian Klaver adrian.kla...@aklaver.com wrote: On 04/07/2015 07:49 PM, Tim Uckun wrote: I understand that there is overhead involved in parsing the strings and such. The amount of overhead was surprising

Re: [GENERAL] Benchmarking partitioning triggers and rules

2015-04-07 Thread Tim Uckun
I understand that there is overhead involved in parsing the strings and such. The amount of overhead was surprising to me but that's another matter. What I am really puzzled about is the difference between the statements EXECUTE 'INSERT INTO ' || quote_ident(partition_name) || ' SELECT
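The two variants being compared, reconstructed as fragments from inside a plpgsql partitioning trigger function (the static partition name is illustrative):

```sql
-- 1) Dynamic SQL: parsed and planned again on every call.
EXECUTE 'INSERT INTO ' || quote_ident(partition_name)
     || ' SELECT ($1).*' USING NEW;

-- 2) Static INSERT: the plan is prepared once and cached, but the
--    target partition must be known when the function is written.
INSERT INTO measurements_2015_03 SELECT (NEW).*;
```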

Re: [GENERAL] Name spacing functions and stored procedures

2015-03-19 Thread Tim Uckun
I guess I see schemas as ways to group data not functions. It would be very nice to be able to group your code into proper modules though. It's something I really miss.

[GENERAL] Name spacing functions and stored procedures

2015-03-18 Thread Tim Uckun
What do you guys do to namespace your functions so that they are not jumbled in with the system functions and also somewhat hierarchically organized. Obviously it's possible to create schemas for different namespaces but that can lead to a lot of schemas which hold no data. The other way is to

Re: [GENERAL] Benchmarking partitioning triggers and rules

2015-03-11 Thread Tim Uckun
Hey I hate to bump my post but I would really appreciate some input on this benchmark. I am very alarmed that adding a very simple partitioning trigger slows the insert speed by an order of magnitude. Is there any way to speed this up? On Mon, Mar 9, 2015 at 11:44 AM, Tim Uckun timuc

Re: [GENERAL] Benchmarking partitioning triggers and rules

2015-03-11 Thread Tim Uckun
wrote: On 11.3.2015 21:43, Tim Uckun wrote: Hey I hate to bump my post but I would really appreciate some input on this benchmark. I am very alarmed that adding a very simple partitioning trigger slows the insert speed by an order of magnitude. Is there any way to speed this up? I think

[GENERAL] Benchmarking partitioning triggers and rules

2015-03-08 Thread Tim Uckun
I wrote a quick benchmarking script to test various partitioning strategies. The code is here. https://gist.github.com/timuckun/954ab6bdce36fa14bc1c I was astonished at the variability of the timings between the different variations. The test data contained 270K records. I did a simple insert

Re: [GENERAL] Triggers and scalability in high transaction tables.

2015-02-26 Thread Tim Uckun
an external process. Thanks. On Fri, Feb 27, 2015 at 11:12 AM, Jerry Sievers gsiever...@comcast.net wrote: Tim Uckun timuc...@gmail.com writes: I want to write a trigger which runs semi-complicated code after each insert. I have done some reading and from what I can gather this could cause

[GENERAL] Triggers and scalability in high transaction tables.

2015-02-26 Thread Tim Uckun
I want to write a trigger which runs semi-complicated code after each insert. I have done some reading and from what I can gather this could cause problems because after insert triggers don't spill to the disk and can cause queue problems. Many people suggest LISTEN NOTIFY but that's not going

Re: [GENERAL] What's a reasonable maximum number for table partitions?

2015-02-13 Thread Tim Uckun
, 2015 at 11:15 AM, Bill Moran wmo...@potentialtech.com wrote: On Sat, 14 Feb 2015 11:14:10 +1300 Tim Uckun timuc...@gmail.com wrote: If I used modulo arithmetic how would the query optimizer know which table to include and exclude? For example say I did modulo 100 based on the field

Re: [GENERAL] What's a reasonable maximum number for table partitions?

2015-02-13 Thread Tim Uckun
: On Thu, Feb 12, 2015 at 7:44 PM, Tim Uckun timuc...@gmail.com wrote: Does anybody have experience with huge number of partitions if so where did you start running into trouble? I use an arbitrary 100-way split for a lot of tracking info. Just modulo 100 on the ID column. I've never had any issues

[GENERAL] What's a reasonable maximum number for table partitions?

2015-02-12 Thread Tim Uckun
The documentation says having too many partitions will end up being unproductive as it will cause the optimizer to examine all the tables for query planning. So I am wondering what's a reasonable upper limit? If I was to partition a table by day I would have 365 tables per year. Is that too

Re: [GENERAL] Partioning with overlapping and non overlapping constraints

2015-02-09 Thread Tim Uckun
This approach wouldn't work I think. The data keeps growing in the hot table. On Tue, Feb 10, 2015 at 6:01 AM, Melvin Davidson melvin6...@gmail.com wrote: Well, without knowing too much about your application, it certainly sounds like using the metricts_MMDD is the way to go. As for

Re: [GENERAL] Partioning with overlapping and non overlapping constraints

2015-02-09 Thread Tim Uckun
Don't you have duplicate information within your UTC, location and local_time data ? Maybe you can just attach a timezone to each location... Yes there is duplicate information but dealing with time zones is a PITA and the easiest way to solve the myriad of problems I have is to store the local

Re: [GENERAL] Partioning with overlapping and non overlapping constraints

2015-02-08 Thread Tim Uckun
, David G Johnston david.g.johns...@gmail.com wrote: Tim Uckun wrote 1. Should I be worried about having possibly hundreds of thousands of shards. IIRC, yes. 2. Is PG smart enough to handle overlapping constraints on table and limit it's querying to only those tables that have the correct

[GENERAL] Partioning with overlapping and non overlapping constraints

2015-02-05 Thread Tim Uckun
I have two partitioning questions I am hoping somebody can help me with. I have a fairly busy metric(ish) table. It gets a few million records per day, the data is transactional for a while but then settles down and is used for analytical purposes later. When a metric is reported both the UTC

Re: [GENERAL] Postgres seems to use indexes in the wrong order

2015-01-27 Thread Tim Uckun
Thanks for the tips. I'll make some adjustments On Tue, Jan 27, 2015 at 8:38 PM, Sameer Kumar sameer.ku...@ashnik.com wrote: On Tue, Jan 27, 2015 at 6:59 AM, Tim Uckun timuc...@gmail.com wrote: The query seems to first use the timestamp column which results in a huge number of records

Re: [GENERAL] Postgres seems to use indexes in the wrong order

2015-01-26 Thread Tim Uckun
The effective_cache_size is one gig. The others are not set so therefore the default. On Sun, Jan 25, 2015 at 6:08 AM, Sameer Kumar sameer.ku...@ashnik.com wrote: On Fri, Jan 23, 2015 at 3:04 PM, Tim Uckun timuc...@gmail.com wrote: Take a look at this explain http://explain.depesz.com/s

Re: [GENERAL] Postgres seems to use indexes in the wrong order

2015-01-26 Thread Tim Uckun
Kumar sameer.ku...@ashnik.com wrote: On Fri, Jan 23, 2015 at 3:04 PM, Tim Uckun timuc...@gmail.com wrote: Take a look at this explain http://explain.depesz.com/s/TTRN ​Adding some info on the query and table structure (and indexes) would be helpful here.​ The final number of records

[GENERAL] Postgres seems to use indexes in the wrong order

2015-01-22 Thread Tim Uckun
Take a look at this explain http://explain.depesz.com/s/TTRN The final number of records is very small but PG is starting out with a massive number of records and then filtering most of them out. I don't want to really force pg to always use the same index because in some cases this strategy

[GENERAL] Getting truncated queries from pg_stat_statements

2015-01-19 Thread Tim Uckun
I am trying to get some slow query information and the results from my query are truncated at 2047 characters. Some of my queries are very long so they get truncated in the select. Is there a way around this? Here is my query. WITH query_stats AS ( SELECT query::text,
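As the replies note, this is version-dependent: up to 9.3 the query text kept by pg_stat_statements is capped by a server setting, while 9.4 began storing full query texts externally.

```sql
-- Up to 9.3 the stored query text is capped by this setting
-- (default 1024 bytes); raising it requires a server restart:
SHOW track_activity_query_size;
-- postgresql.conf:  track_activity_query_size = 16384
```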

Re: [GENERAL] Getting truncated queries from pg_stat_statements

2015-01-19 Thread Tim Uckun
19, 2015 at 6:41 PM, Tim Uckun timuc...@gmail.com wrote: Ok thanks. I am still on 9.3 so I'll adjust that setting. On Tue, Jan 20, 2015 at 12:15 PM, Peter Geoghegan peter.geoghega...@gmail.com wrote: On Mon, Jan 19, 2015 at 2:37 PM, Tim Uckun timuc...@gmail.com wrote: I am trying to get

Re: [GENERAL] Getting truncated queries from pg_stat_statements

2015-01-19 Thread Tim Uckun
Ok thanks. I am still on 9.3 so I'll adjust that setting. On Tue, Jan 20, 2015 at 12:15 PM, Peter Geoghegan peter.geoghega...@gmail.com wrote: On Mon, Jan 19, 2015 at 2:37 PM, Tim Uckun timuc...@gmail.com wrote: I am trying to get some slow query information and the results from my query

Re: [GENERAL] Managing Key Value tags on rows

2014-11-17 Thread Tim Uckun
wrote: On Wed, Nov 12, 2014 at 7:17 PM, Tim Uckun timuc...@gmail.com wrote: What is the most efficient way to model key value tags on records. The keys and values will be variable and changing over time. The values may be of various types (numbers, dates, strings etc

[GENERAL] Managing Key Value tags on rows

2014-11-12 Thread Tim Uckun
What is the most efficient way to model key value tags on records. The keys and values will be variable and changing over time. The values may be of various types (numbers, dates, strings etc). There will be filters and group by selects based on tag names and maybe even tag values. Postgres
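One common approach (an illustration, not from the thread): a jsonb column with a GIN index handles variable keys, mixed value types, and indexed containment filters.

```sql
CREATE TABLE records (
  id   serial PRIMARY KEY,
  tags jsonb NOT NULL DEFAULT '{}'
);
CREATE INDEX records_tags_idx ON records USING gin (tags);

-- Filter on a tag (uses the GIN index):
SELECT * FROM records WHERE tags @> '{"color": "red"}';
-- Group by a tag value:
SELECT tags->>'color', count(*) FROM records GROUP BY 1;
```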

[GENERAL] Hierarchical rollups.

2014-11-12 Thread Tim Uckun
I am looking to model a hierarchical structure where the parent nodes on each level will contain calculations based on immediate children's values. For example the parent row may have an average of all the child rows (the calculations will be more complicated than that of course). So every time a

[GENERAL] Fast data, slow data

2014-06-26 Thread Tim Uckun
I have a use case in which the most recent data experiences a lot of transactions (inserts and updates) and then the churn kind of calms down. Eventually the data is relatively static and will only be updated in special and sporadic events. I was thinking about keeping the high churn data in a

Re: [GENERAL] Fast data, slow data

2014-06-26 Thread Tim Uckun
cause vacuum problems. On Fri, Jun 27, 2014 at 2:49 AM, Shaun Thomas stho...@optionshouse.com wrote: On 06/26/2014 04:29 AM, Tim Uckun wrote: I don't think partitioning is a good idea in this case because the partitions will be for small time periods (5 to 15 minutes). Actually

Re: [GENERAL] Fast data, slow data

2014-06-26 Thread Tim Uckun
at this point. I don't want to say that the data is not important but if I drop one or two sensor readings it's not going to be the end of the world. On Fri, Jun 27, 2014 at 3:45 AM, Steve Crawford scrawf...@pinpointresearch.com wrote: On 06/26/2014 02:29 AM, Tim Uckun wrote: I have a use

Re: [GENERAL] DB alias ?

2014-03-23 Thread Tim Uckun
How can this be combined with other commands? For example psql doesn't like this: psql service=test1 -d test_database -c some command The PGSERVICE=test1 psql blah blah works but seems cumbersome. Why isn't there a psql --service=blah option? On Thu, Jan 24, 2013 at 9:48 PM, Guillaume
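A sketch of how the service name can be combined with other parameters: psql accepts a conninfo string in place of separate flags, so no --service option is needed.

```shell
# Pass the service inside a conninfo string along with other parameters:
psql "service=test1 dbname=test_database" -c 'SELECT 1'
# Equivalent, via the environment:
PGSERVICE=test1 psql -d test_database -c 'SELECT 1'
```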

[GENERAL] Database snapshots or clones for staging and testing.

2014-01-30 Thread Tim Uckun
Hi all. I have the following scenario I want to accomplish. In order to test a new branch of code I want to create a snapshot of the live database into a testing database. The code will be deployed after that and it may run some migrations which will change the schema of the database. The code

Re: [GENERAL] invisible dependencies on a table?

2013-12-14 Thread Tim Uckun
I cannot readily speak to why you are not seeing sequence ownership as a dependent when looking at the now-archive table definition. pgadmin knows it's a dependency because when you try to drop it you get a message saying so but I can't see it in the definition of the table. BTW is there a

Re: [GENERAL] invisible dependencies on a table?

2013-12-14 Thread Tim Uckun
As I've marked here, both default expressions are depending on the sequence, but there's only one ownership dependency of the sequence on a column. To complete the switchover you'd need to use ALTER SEQUENCE ... OWNED BY ... to move that ownership dependency to the new table. Then the old

[GENERAL] invisible dependencies on a table?

2013-12-12 Thread Tim Uckun
I have a table foo. It has a serial column called id. I execute the following statement ALTER TABLE table_name RENAME TO archived_table_name; CREATE TABLE table_name (LIKE archived_table_name INCLUDING DEFAULTS INCLUDING CONSTRAINTS INCLUDING INDEXES); . Archieve the table
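The archive-and-swap pattern from the post, with the fix suggested in the replies; table and sequence names here are illustrative. Both tables end up sharing one serial sequence, so its ownership must be moved to the new table before the old one can be dropped safely.

```sql
ALTER TABLE events RENAME TO events_archived;
CREATE TABLE events (LIKE events_archived
  INCLUDING DEFAULTS INCLUDING CONSTRAINTS INCLUDING INDEXES);
-- Move the shared sequence's ownership dependency to the new table:
ALTER SEQUENCE events_id_seq OWNED BY events.id;
```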

Re: [GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-02 Thread Tim Uckun
I can always convert it. I am just not getting the right offset when I convert. That's what's puzzling. On Wed, Oct 2, 2013 at 9:15 PM, Albe Laurenz laurenz.a...@wien.gv.at wrote: Tim Uckun wrote: I have the following query. [...] SELECT interval_start, (interval_start AT TIME ZONE

Re: [GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-02 Thread Tim Uckun
Theodore tony.theod...@gmail.com wrote: On 02/10/2013, at 6:49 PM, Tim Uckun timuc...@gmail.com wrote: The reason for that is that in PostgreSQL there is no time zone information stored along with a timestamp with time zone, it is stored in UTC. That seems unintuitive. What is the difference

Re: [GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-02 Thread Tim Uckun
I do think it would help to add it to the docs. The name TIMESTAMP WITH TIME ZONE clearly implies the time zone is stored in the field. One day there will really be a timestamp with the time zone embedded in the field, and I wonder what they will call that.

[GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-01 Thread Tim Uckun
I have the following query. with parsed_data as ( SELECT devicereportedtime , DATE_TRUNC('minute', devicereportedtime - (EXTRACT(minute FROM devicereportedtime)::integer % 5 || 'minutes')::interval) as interval_start FROM systemevents WHERE devicereportedtime =
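The 5-minute bucketing expression from the query, applied to a sample literal: 12:38:21 falls into the 12:35:00 bucket (38 % 5 = 3, subtract 3 minutes, then truncate to the minute).

```sql
SELECT DATE_TRUNC('minute',
         ts - (EXTRACT(minute FROM ts)::integer % 5 || ' minutes')::interval)
FROM (VALUES (timestamp '2013-10-01 12:38:21')) AS t(ts);
```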

Re: [GENERAL] Shortcut evaluation in OR or IN

2013-05-07 Thread Tim Uckun
Thanks for the explanation. On Mon, May 6, 2013 at 8:43 PM, Jasen Betts ja...@xnet.co.nz wrote: On 2013-05-06, Tim Uckun timuc...@gmail.com wrote: --047d7b2e4ea07402b004dc034a3b Content-Type: text/plain; charset=UTF-8 Say I have a select like this. SELECT * FROM table where field

[GENERAL] Shortcut evaluation in OR or IN

2013-05-05 Thread Tim Uckun
Say I have a select like this. SELECT * FROM table where field = X OR field = Y limit 1 And I have two records one that matches X and one that matches Y will I always get X because the evaluation will stop after the first clause in the OR matches? What about for IN (X, Y) how about if I am

Re: [GENERAL] Using varchar primary keys.

2013-04-01 Thread Tim Uckun
> how about using an enum instead of this table? That's an interesting idea. Are enums mutable?
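To the question asked here: enum types are partially mutable. New values can be added in place (since 9.1), but there is no way to drop a value short of recreating the type. The type name below is illustrative.

```sql
CREATE TYPE order_status AS ENUM ('pending', 'shipped');
-- Values can be appended or inserted at a chosen position:
ALTER TYPE order_status ADD VALUE 'cancelled' AFTER 'pending';
-- There is no DROP VALUE; removing one means recreating the type.
```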

Re: [GENERAL] Using varchar primary keys.

2013-04-01 Thread Tim Uckun
On Tue, Apr 2, 2013 at 8:35 AM, jesusthefrog jesusthef...@gmail.com wrote: On the topic of 'natural' versus 'synthetic' primary keys, I am generally in the camp that an extra ID field won't cost you too much, and while one may not need it for a simple table (i.e. id, name) one might add any

[GENERAL] Using varchar primary keys.

2013-03-31 Thread Tim Uckun
Consider the following scenario. I have a typical tagging structure. There is a table called tags, there is a table called taggings. The taggings table acts as a many to many join table between the taggers and the tags. The tags table has two fields id and tag. id is a serial data type. The

Re: [GENERAL] Using varchar primary keys.

2013-03-31 Thread Tim Uckun
Natural Keys have a purpose but when do they exist in the database? In my case it's the tags. Table tags has only two columns id and tag. Each one has a unique index. I have many other similar lookup tables. For example things like order_status, item_type etc. What about the Natural Keys

Re: [GENERAL] Why is this a cross join?

2013-02-18 Thread Tim Uckun
for every row in one of the tables? If so, how do you want to determine which of the matching rows in the other table to choose, and which to ignore? In this case it's a simple lookup. Any of the matching rows will do but I just want one. -- Sent via pgsql-general mailing list
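Since any matching row will do, a LATERAL subquery (available from 9.3, which postdates this thread) picks one match per crm row instead of multiplying rows; names follow the original query.

```sql
SELECT crm.*, m.*
FROM sap_crm_sales crm
CROSS JOIN LATERAL (
  SELECT mc.*
  FROM model_configurations mc
  WHERE left(mc.sap_code, 6) = left(crm.customer_class, 6)
  LIMIT 1                     -- keep exactly one arbitrary match
) m;
```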

[GENERAL] Why is this a cross join?

2013-02-17 Thread Tim Uckun
I have a seemingly simple query that returns way too many records. Here is the count query select count(*) from ( select crm.* from sap_crm_sales crm inner join model_configurations mc on left(crm.customer_class, 6) = left(mc.sap_code,6) ) as s The crm table has about 44K records,

Re: [GENERAL] Why is this a cross join?

2013-02-17 Thread Tim Uckun
Apparently the first 6 characters of those fields are quite common, which gives you a result for every possible combination of the same 6-character value. M. That seems kind of weird. Is there any way to NOT have this be a cross join? For example if I extracted the first six characters

Re: [GENERAL] Why is this a cross join?

2013-02-17 Thread Tim Uckun
In some way, every join is a cross join, with the results filtered according to the specificity of the join conditions. In this case: inner join model_configurations mc on left(crm.customer_class, 6) = left(mc.sap_code,6) customer_class sounds like a fairly generic sort of field, so

Re: [GENERAL] Upsert Functionality using CTEs

2013-02-11 Thread Tim Uckun
The best strategy is fixing your data-model so that you have a unique key. As you found out already, e-mail addresses aren't very suitable as unique keys for people. For this particular case I'd suggest adding a surrogate key. Alternatively, you might try using (first_name, email) as your

Re: [GENERAL] Upsert Functionality using CTEs

2013-02-11 Thread Tim Uckun
I read it that he has multiple sales from the same person? In which case pretending that the two sales were from different people isn't the correct result at all. Actually it turns out that both cases exist. I may be missing the point of the query, but wasn't it to add an entry for each

[GENERAL] Upsert Functionality using CTEs

2013-02-10 Thread Tim Uckun
I am using a query like this to try and normalize a table. WITH nd as (select * from sales order by id limit 100), people_update as (update people p set first_name = nd.first_name from nd where p.email = nd.email returning nd.id), insert into people (first_name, email, created_at, updated_at)
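A completed version of the shape in the post, since the snippet is cut off; note this read-modify-write is not safe under concurrent writers, and INSERT ... ON CONFLICT (9.5+) later made upserts atomic.

```sql
WITH nd AS (
  SELECT * FROM sales ORDER BY id LIMIT 100
), people_update AS (
  UPDATE people p
  SET first_name = nd.first_name
  FROM nd
  WHERE p.email = nd.email
  RETURNING nd.id
)
-- Insert only the rows the UPDATE did not touch:
INSERT INTO people (first_name, email, created_at, updated_at)
SELECT nd.first_name, nd.email, now(), now()
FROM nd
WHERE nd.id NOT IN (SELECT id FROM people_update);
```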

Re: [GENERAL] Dropping default privileges.

2013-01-30 Thread Tim Uckun
Thanks. Worked. On Wed, Jan 30, 2013 at 9:12 PM, Albe Laurenz laurenz.a...@wien.gv.at wrote: Tim Uckun wrote: drop role tim; ERROR: role tim cannot be dropped because some objects depend on it DETAIL: owner of default privileges on new relations belonging to role tim in schema strongmail

[GENERAL] Dropping default privileges.

2013-01-29 Thread Tim Uckun
I am unable to drop a user. drop role tim; ERROR: role tim cannot be dropped because some objects depend on it DETAIL: owner of default privileges on new relations belonging to role tim in schema strongmail ALTER DEFAULT PRIVILEGES IN SCHEMA strongmail REVOKE INSERT, SELECT, UPDATE,
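The resolution from the thread's replies, as a short sketch: DROP OWNED revokes the role's privileges and default privileges without dropping the schema itself, though it also drops any objects the role owns.

```sql
DROP OWNED BY tim;   -- revokes privileges and default privileges;
                     -- caution: also drops objects owned by tim
DROP ROLE tim;
```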

Re: [GENERAL] Dropping default privileges.

2013-01-29 Thread Tim Uckun
drop role tim; ERROR: role tim cannot be dropped because some objects depend on it DETAIL: owner of default privileges on new relations belonging to role tim in schema strongmail DROP OWNED BY ought to get rid of that. Just to be clear. I don't want to drop the schema. Will DROP OWNED

Re: [GENERAL] Running update in chunks?

2013-01-26 Thread Tim Uckun
That would be one solution, but I think a better one would be to not store make_id in imports in the first place, but instead to always fetch it by joining imports to models at query time. My problem here is that the incoming data is quite messy so the join conditions become weird (lots of

Re: [GENERAL] Running update in chunks?

2013-01-25 Thread Tim Uckun
Sorry I haven't been responsive for a little while. I ran your script, but created a new schema instead of using my existing schema. My timings were similar to yours (more or less) except for the vacuums which took roughly 147891 ms. On Tue, Jan 22, 2013 at 3:21 PM, Kevin Grittner kgri...@mail.com

Re: [GENERAL] Running update in chunks?

2013-01-25 Thread Tim Uckun
What if you do: alter table cars.imports set (fillfactor=50); Before the vacuum full, and then try the update again? This makes a dramatic difference when combined with a vacuum. UPDATE 98834 Time: 3408.210 ms Ten times faster! -- Sent via pgsql-general mailing list
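The sequence that produced the tenfold speedup reported here, written out (table name from the thread):

```sql
ALTER TABLE cars.imports SET (fillfactor = 50);
VACUUM FULL cars.imports;   -- rewrite the table with pages only half full
-- With free space on each page, new row versions can stay on the same
-- page (HOT updates), skipping most index maintenance:
UPDATE cars.imports SET make_id = NULL;
```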

Re: [GENERAL] Running update in chunks?

2013-01-25 Thread Tim Uckun
autoanalyze_count | 25 On Wed, Jan 23, 2013 at 7:50 AM, Steve Crawford scrawf...@pinpointresearch.com wrote: On 01/21/2013 05:02 PM, Tim Uckun wrote: I already posted the schema earlier. It's a handful of integer fields with one hstore field. Oh well. I can't find it but maybe it got lost

Re: [GENERAL] Running update in chunks?

2013-01-25 Thread Tim Uckun
I agree that seems like the most likely cause. Each update to the row holding the hstore column requires adding new index entries for all the hstore elements, and autovacuum will need to clean up the old ones in the background. The best solution would be to either normalize the data instead

Re: [GENERAL] Running update in chunks?

2013-01-25 Thread Tim Uckun
That suggests (to me, at least) that it is related to index updating. Again, your GIN index seems primary candidate. Try running iostat (I think that's available on a Mac) with/without the fillfactor and with/without the GIN index while you do the updates. It's possible your SSD is just

Re: [GENERAL] Running update in chunks?

2013-01-21 Thread Tim Uckun
Can you try a couple of things just to check timings. Probably worth EXPLAIN ANALYSE. SELECT count(*) FROM cars.imports i JOIN cars.models md ON i.model_id = md.id; Takes about 300 ms CREATE TEMP TABLE tt AS SELECT i.* FROM cars.imports i JOIN cars.models md ON i.model_id = md.id;

Re: [GENERAL] Running update in chunks?

2013-01-21 Thread Tim Uckun
Just to close this up and give some guidance to future googlers... There are no foreign keys. The table is empty when I start. I copy the data into it from a text file. Removing indexes didn't help much (made a very slight difference). Running a query CREATE TEMP TABLE tt AS SELECT using a

Re: [GENERAL] Running update in chunks?

2013-01-21 Thread Tim Uckun
Nope. If you have any suggestions I am all ears. For the purposes of this discussion we can narrow the problem down to this update statement: Update imports set make_id = null. There are 98K records in the table. There is no index on the make_id field. Standard untouched postgresql.conf from the

Re: [GENERAL] Running update in chunks?

2013-01-21 Thread Tim Uckun
I'd be curious to see results of the same update on a standard HDD vs the SSD, and maybe on a more typical database deployment hardware vs a macbook air. I haven't tried it on any other machine yet. CREATE TEMP TABLE tt as SELECT ... takes eight seconds so presumably the disk is not the
