[SQL] "Truncate [ Table ] name [Cascade]"?
Hi there,

Would it be possible to implement some kind of cascading truncate? As far as I understand, the "no truncate if the table is referenced" change was introduced to ensure database integrity. However, if the referencing table is truncated too, there should be no problem as far as foreign keys are concerned, correct? Another option that seems feasible to me: how about allowing truncates on tables that are only referenced (if at all) by tables with no entries? Since no data is actually "enforcing" the foreign key restriction, truncating the table should be safe ...

The rationale behind this suggestion is that we need a quick way to purge the entries in all tables in order to accelerate the reinitialization of the tables for our unit tests. If you know of a better way to truncate all tables, please let us know.

Thanks for your time, Andi.
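For later readers: more recent PostgreSQL releases can truncate several tables in one statement and cascade to referencing tables, which covers the unit-test use case; a minimal sketch with placeholder table names:

  TRUNCATE TABLE customers, orders, order_items CASCADE;
  -- from 8.4 on, RESTART IDENTITY additionally resets the owned sequences:
  TRUNCATE TABLE customers, orders, order_items RESTART IDENTITY CASCADE;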
[SQL] Notation of index
Hi,

I'm moving from MySQL to Pg. MySQL lets me create indices like this:

  CREATE TABLE t (
      id         INTEGER NOT NULL,
      numba      INTEGER NOT NULL,
      txt        VARCHAR(100) NOT NULL,
      anosanumba INTEGER NOT NULL,
      PRIMARY KEY (id),
      INDEX (numba),
      UNIQUE (anosanumba)
  );

PostgreSQL doesn't like the line "INDEX (numba),". Instead I have to use

  CREATE INDEX idx_t_numba ON t (numba);

outside the CREATE TABLE statement. And as far as I understand, I have to give a plain INDEX an explicit name, while Pg makes one up for the UNIQUE. The point is that I want to use foreign keys, and I figure it would help if those FK columns were indexed, but PG won't create indices for columns in FK constraints automatically. Am I right? Why can't we use INDEX the same way as UNIQUE? Perhaps even as in "numba INT4 NOT NULL INDEX"?
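For what it's worth: PostgreSQL indeed creates indexes automatically only for PRIMARY KEY and UNIQUE constraints, not for foreign-key columns or a plain INDEX. A sketch of the usual workaround, using the table from the post:

  CREATE TABLE t (
      id         INTEGER NOT NULL PRIMARY KEY,
      numba      INTEGER NOT NULL,
      txt        VARCHAR(100) NOT NULL,
      anosanumba INTEGER NOT NULL UNIQUE
  );
  CREATE INDEX idx_t_numba ON t (numba);
  -- newer releases also accept CREATE INDEX ON t (numba) and pick a name themselves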
[SQL] where to start with this procedure ?
Hi,

I know a bit of SQL but not exactly a lot, so I ran into this problem. I have tables on the server that get joined in a view. That's OK. Now I need just a couple of records, say 10-100 out of 30 000, which could easily be filtered by an integer key. As of now I have to pull the whole lot into Access and let it do the filtering. That doesn't play nice on our network. Could someone kick me in the right direction where to learn stuff like:

  function grabem(x integer) returns recordset
      ( grabem = select * from my_view where key = x )
Re: [SQL] where to start with this procedure ?
Hi Andrei,

> Use the Offset and Limit in the SQL query. [...]
>   SELECT select_list FROM table_expression WHERE condition LIMIT 50 OFFSET 1
> This query will return 50 elements starting with the 1st, so the elements from 1 to 50.

That isn't the issue, since I only need a specific few of the rows at all. I need a dynamic WHERE clause. It's more like this:

  SELECT customer_id, and, some, more, fields
  FROM table1 JOIN table2 JOIN table3

This results in the "huge" set. On the Access form I only need the orders or the contact history for 1 customer. As of now I only know 2 ways to solve this:

1) Load all the tables over the net into Access and let the JOIN run locally. That way I can filter dynamically by adding a WHERE customer_id = x to the select above.
2) Let the JOINs run in a server-based view and transfer the whole result into Access to fetch the few lines regarding the customer.

There should be something like:

3) a) Access calls a server-based function with the customer_id as parameter.
   b) The function calls the server-based view and filters the result according to its parameter.
   c) The function sends only the actually wanted lines back to Access.
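A sketch of option 3 as a server-side set-returning SQL function; my_view and customer_id are the names assumed from the post, and the client then just selects from the function:

  CREATE OR REPLACE FUNCTION grabem(integer)
  RETURNS SETOF my_view AS $$
      SELECT * FROM my_view WHERE customer_id = $1;
  $$ LANGUAGE sql STABLE;

  -- called from Access / ODBC like an ordinary query:
  SELECT * FROM grabem(42);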
[SQL] Multitable uniqueness ?
Hi folks,

Is there a way to have something like this: UNIQUE (table_1.id, table_2.xxx)?

I've got some tables that have a couple of foreign keys. Now I try to minimize those relationships to clean up the mess. :-} We do business fairs (like CeBIT, only a few magnitudes smaller), so we have projects and rent stalls to customers:

  customers (c_id, ...)
  projects  (p_id, ...)

There is a relation cust_proj (cp_id, c_fk, p_fk, status_fk) with a UNIQUE constraint on (c_fk, p_fk). A customer can have several orders, contacts, ... tied to a project. Those look like this:

  stalls (stall_id, cp_id, stall_no, ...)

stall_id is the PRIMARY KEY, cp_id is a FOREIGN KEY that ties to customer and project, and stall_no is a varchar. It should be unique within a project. Will I have to integrate the project id into the stalls table?
Re: [SQL] Multitable uniqueness ?
Jean-Luc Lachance schrieb:
> Do you really need MANY-TO-MANY between customers and projects? I can see customers owning many projects, but do you really have projects belonging to many customers?

In this case yes.

  projects (1, 'x-fair 2003'; 2, 'y-fair 2003'; 3, 'x-fair 2004')
  customer (1, 'X ltd'; 2, 'Y'; 3, 'Z')

Maybe all 3 have a stall on project 1. c1 and c2 attend project 2, where c2 has 2 stalls, one inside and one outside the hall. I have the cust_proj relation to tie other objects like proposals and letters to something small and common for all the communication with a customer.

> If not, fold cust_proj into projects. Otherwise, UNIQUE (cp_id, stall_no) on stalls should be enough.

cp_id implies uniqueness of (customer, project) couples. So with UNIQUE (cp_id, stall_no) we get --> UNIQUE (c_id, p_id, stall_no). This'd be too much. stall_no is the number of the cubicle a customer rents. It needs to be unique within one project, so that we can relate it to a specific spot on the grounds when we send visitors there, and we print those numbers in the flyer.
Re: [SQL] Multitable uniqueness ?
Bruno Wolff III wrote:
> On Wed, May 26, 2004 at 05:13:14 +0200, Andreas <[EMAIL PROTECTED]> wrote:
>> Is there a way to have something like this: UNIQUE (table_1.id, table_2.xxx)
> Postgres doesn't support database constraints at this time, which is what you would need to do this simply.

Well, a simple way would be to have a table_1 foreign key in table_2. I just thought, since there is

  table_1 <--1:n-- table_x <--1:n-- table_2

I hoped to avoid adding table_1 <--1:n-- table_2 just to have uniqueness of an attribute of table_2 together with the key of table_1.

> You can enforce this constraint by creating a third table

That is more effort than stuffing the column into table_2. Thanks anyway ;)
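For the stalls example from this thread, "stuffing the column in" can at least be kept consistent by the database itself. A sketch (names as used above; it assumes cust_proj carries the project id as p_fk):

  ALTER TABLE cust_proj ADD UNIQUE (cp_id, p_fk);      -- target for a composite FK

  CREATE TABLE stalls (
      stall_id  serial PRIMARY KEY,
      cp_id     integer NOT NULL,
      p_fk      integer NOT NULL,                      -- copied project id
      stall_no  varchar NOT NULL,
      FOREIGN KEY (cp_id, p_fk) REFERENCES cust_proj (cp_id, p_fk),  -- keeps the copy consistent
      UNIQUE (p_fk, stall_no)                          -- stall_no unique per project, not per customer
  );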
Re: [SQL] Database backup
sreejith s wrote:
> How to take a database backup from an application developed in Visual Basic that's running on a Windows client, with the database residing on a Linux server? Thanx

You asked a similar question some days ago and mentioned that pg_dump wouldn't work on Windows, so I figure you don't want to let the Linux host make the backups but rather your Windows client. You could go 2 ways.

1) Install pg_dump on your Windows client, either as PostgreSQL in a CygWin environment or as a Windows-native PostgreSQL 7.5 or 8.0 installation. Since you only need pg_dump, you wouldn't configure a running PostgreSQL, so this should be pretty easy. E.g. if the Linux host's IP is 192.168.0.123, then you can run on Windows:

  pg_dump --host=192.168.0.123 -U your_username your_database > your_database.sql

2) Let the Linux server do the backup and transfer the resulting backup file as soon as possible to your Windows client.
   a) The server could push it through SAMBA, SCP or FTP.
   b) The client could collect all backup files when it gets booted, if the server puts the files in a SAMBA share that the client can mount as a network drive or access via FTP.

I'd propose you take alternative 2) b).
[SQL] How to check date-interval constraints
Hi,

I'd like to have a table that looks like this:

  my_option (
      id          serial primary key,
      myvalue     double precision,
      valid_start timestamp,
      valid_stop  timestamp
  );

I want to store values that are only valid in a given start-stop interval, so I can find the date-specific value for NOW() or some other given date:

  select myvalue from my_option where somedate between valid_start and valid_stop;

How can I have a constraint that prohibits nested or overlapping intervals?

  1  7  2006-1-1   2006-1-31
  2  9  2006-2-1   2006-2-28   OK
  3  5  2006-1-10  2006-1-20   BAD - lies within line 1
  4  3  2006-1-20  2006-2-10   BAD - starts within line 1 and ends in line 2

To make it even more interesting, it'd be nice to add a type column so I could ask:

  select myvalue from my_option where now() between valid_start and valid_stop AND mytype = 42;

Then an interval should only be required not to overlap with other intervals of the SAME type.
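On much newer servers (9.2 and later) this kind of rule can be expressed directly as an exclusion constraint; a sketch using a timestamp range plus the btree_gist extension so the type column can take part:

  CREATE EXTENSION btree_gist;

  CREATE TABLE my_option (
      id          serial PRIMARY KEY,
      myvalue     double precision,
      mytype      integer NOT NULL,
      valid_start timestamp NOT NULL,
      valid_stop  timestamp NOT NULL,
      EXCLUDE USING gist (
          mytype                                  WITH =,
          tsrange(valid_start, valid_stop, '[]')  WITH &&
      )
  );
  -- && means "overlaps", so two rows of the same mytype with overlapping intervals are rejected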
[SQL] How to update a table within a join efficiently ?
Hi,

how would I update a table within a join in a more efficient way? E.g. the following case: table_a holds abstract elements. One column represents "priority", which can be based on information from other tables. table_b might hold such details in a column "size" for about 3000 of 8 records out of table_a. I'd like to do this:

  UPDATE table_a
  SET table_a.prio = CASE WHEN size >= 10 THEN 1 ELSE 2 END
  FROM table_a JOIN table_b USING (table_a_id)

This doesn't work. But the following does, though it doesn't look efficient with those 3000 SELECTs instead of one preparatory JOIN that fetches the relevant info. :(

  UPDATE table_a
  SET prio = ( SELECT CASE WHEN size >= 10 THEN 1 ELSE 2 END
               FROM table_b
               WHERE table_a.table_a_id = table_b.table_a_id )
  WHERE table_a_id IN (SELECT table_a_id FROM table_b);

Is there a better way?
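For the record, PostgreSQL's UPDATE ... FROM does exactly this join in one pass; the target table is simply not repeated in the FROM clause (column names as assumed in the post):

  UPDATE table_a
  SET    prio = CASE WHEN table_b.size >= 10 THEN 1 ELSE 2 END
  FROM   table_b
  WHERE  table_a.table_a_id = table_b.table_a_id;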
[SQL] Need magical advice for counting NOTHING
Hi,

The source select counts log events per user. All is well when a user has at least one event per log_type in the log within a given timespan. If one log_type is missing, COUNT() has nothing to count and there is, expectedly, no result line that says 0. BUT I need this 0-line because of a crosstab. :( I need to prevent the categories on the right of my crosstab from slipping to the left when a category on the left is empty.

Server 8.3.5, 3 tables:

  log      (log_id, log_type_fk, user_fk, ts timestamp, ...)
  users    (user_id, user_name, ...)
  log_type (log_type_id, log_type)

There are 3 events as log_type. I naively tried

  SELECT user_name, log_type_fk, COUNT(log_type_fk)
  FROM log
  JOIN users ON (user_id = user_fk)
  WHERE (ts BETWEEN sometime AND another)
  GROUP BY user_name, log_type_fk
  ORDER BY user_name, log_type_fk

This results e.g. in

  user1, type1, 2
  user1, type2, 3
  user1, type3, 7
  user2, type1, 11
  user2, type3, 17

but I also needed

  user2, type2, 0

How would I get there?

Regards Andreas
Re: [SQL] Need magical advice for counting NOTHING
nha schrieb:
> Hello,
> Le 23/07/09 10:23, Glenn Maynard a écrit :
>> On Thu, Jul 23, 2009 at 1:01 AM, Andreas wrote:
>>>  SELECT user_name, log_type_fk, COUNT(log_type_fk)
>>>  FROM log
>>>  JOIN users ON (user_id = user_fk)
>>>  WHERE (ts BETWEEN sometime AND another)
>>>  GROUP BY user_name, log_type_fk
>>>  ORDER BY user_name, log_type_fk
>> [...]
>>   SELECT user_name, log_type.log_type, sum((log_type_fk IS NOT NULL)::integer) AS count
>>   FROM users
>>   JOIN log_type ON (true)
>>   LEFT JOIN log ON (user_id = user_fk AND log.log_type_fk = log_type.log_type)
>>   GROUP BY user_name, log_type.log_type
>>   ORDER BY user_name, log_type.log_type;
> [...]
> In the same direction as Andreas Kretschmer and Glenn Maynard (thanks to the latter for the assumed DDL statements), the following query should also suit:
>   SELECT user_name, log_type, COUNT(log_type_fk)
>   FROM (users CROSS JOIN log_type)
>   LEFT JOIN log ON (user_id = user_fk AND log_type_id = log_type_fk)
>   WHERE (ts BETWEEN sometime AND another)
>   GROUP BY user_name, log_type
>   ORDER BY user_name, log_type

I tried it and found it loses the COUNT() = 0 lines because of the date filtering at this position. ts is a column of log. The tricky lines are those log_types that aren't in log, so ts IS NULL there, and NULL can't be compared to timestamps, so the interesting lines get filtered out of the LEFT JOIN's result and can't be counted. The following approach works AFAIK:

  SELECT user_name, log_type_id, COUNT(log_type_fk)
  FROM (users CROSS JOIN log_type)
  LEFT JOIN ( SELECT user_fk, log_type_fk
              FROM log
              WHERE (ts BETWEEN sometime AND another) ) AS x
         ON (user_id = user_fk AND log_type_id = log_type_fk)
  GROUP BY user_name, log_type_id
  ORDER BY user_name, log_type_id

This subselect gives a really considerable speed-up, too.

> While it is formally assumed that user_id and log_type_id are respectively keys for the users and log_type tables, it is semantically admitted here that user_name identifies user_id in the users table and log_type identifies log_type_id in the log_type table.

Actually I didn't consider this key issue yet. :} But as the result gets stuffed into the crosstab function, the formally strict key log_type_id would do as column 2 as well, or maybe even better, being a number instead of a varchar. I'll even change the first column to user_id and wrap another JOIN users around it so I don't get messed up by users with the same name.

Thank you, and Andreas Kretschmer and Glenn Maynard, for giving me a new view on things. :)
[SQL] How fetch multiple rows into one text-field?
Hi,

how can I fetch multiple rows into one text field? I need the contents of some log infos condensed into a single text to show in a report. There is a log with a single record per event, like this: (log_id, case_id, log_date, log_category, log_notes). I need a date-ordered list of the log date, category and notes of all log events of the same case_id, like

  log_date || ' ' || log_category || ' : ' || log_notes

e.g.

  20.07.2009 Category 17 : utterly noteworthy notes
  21.07.2009 Category 42 : lots more
  22.07.2009 Category 17 : still more drivel

The report consists of a couple of case infos and should look like: case_id, case_name, case_all_log_events. Is there a way to do this?

regards andreas
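A sketch of the usual aggregation approach, assuming a cases table with (case_id, case_name); string_agg() with an ORDER BY inside the aggregate needs 9.0+, on 8.4 array_to_string(array_agg(...), E'\n') over a pre-sorted subselect does the same job:

  SELECT c.case_id, c.case_name,
         string_agg(to_char(l.log_date, 'DD.MM.YYYY') || ' ' || l.log_category || ' : ' || l.log_notes,
                    E'\n' ORDER BY l.log_date)  AS case_all_log_events
  FROM   cases c
  JOIN   log   l ON l.case_id = c.case_id
  GROUP  BY c.case_id, c.case_name;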
[SQL] Howto automatically define column names for a function result.
Hi,

wouldn't it be great to have functions return "setof something" as a result, where "something" is determined from the result of a SELECT within the function? Like:

  CREATE OR REPLACE FUNCTION coffee(integer, timestamp, timestamp)
  RETURNS SETOF AS
  $BODY$
      SELECT staff_id, name, room, COUNT(coffee_id) AS cupcount
      FROM staff JOIN coffee_log ON staff_fk = staff_id
      WHERE (staff_id = $1) AND (coffee_time BETWEEN $2 AND $3)
      GROUP BY staff_id, name, room
      ORDER BY name;
  $BODY$ LANGUAGE 'sql' STABLE

Here the SELECT returns a constant set of columns which, as far as I know, has to be defined as a type to make SETOF happy, or else I have to define the column names whenever I call the function, which would be tedious. Actually this is a pretty simple example of some reports I need to produce. They have around 60 columns, and there is also an aggregate and filtering on an id as well as 2 timestamps. Since the aggregate depends on the id and the timestamps too, it is no solution to build a view and select from that within the function.
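A sketch of the closest thing I know of: RETURNS TABLE (available from 8.4 on) declares the output columns in the function header, so there is no separate composite type and no column list at call time (the column types used here are assumptions):

  CREATE OR REPLACE FUNCTION coffee(integer, timestamp, timestamp)
  RETURNS TABLE (staff_id integer, name text, room text, cupcount bigint) AS $BODY$
      SELECT staff_id, name, room, COUNT(coffee_id) AS cupcount
      FROM   staff JOIN coffee_log ON staff_fk = staff_id
      WHERE  (staff_id = $1) AND (coffee_time BETWEEN $2 AND $3)
      GROUP  BY staff_id, name, room
      ORDER  BY name;
  $BODY$ LANGUAGE sql STABLE;

  SELECT * FROM coffee(7, '2010-01-01', '2010-02-01');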
[SQL] Need magic for a moving statistic
Hi,

I need some magic for a moving statistic that works on a rather big table, starting at a given date within the table up until now. The statistic should count events, always on Fridays, over the 2 weeks before ... biweekly? So I'd like to get a line every 2 weeks for everything in between. I sadly don't know how to spell something that does:

  collect data where insert_date between friday1 and friday1 + interval '2 week'
  collect data where insert_date between friday1 + interval '2 week' + interval '1 second' and friday1 + interval '4 week'
  collect data where insert_date between friday1 + interval '4 week' + interval '1 second' and friday1 + interval '6 week'

Is this possible?

regards
Re: [SQL] SQL moving window averages/statistics
Just 3 points ...

1) Don't use "date" as a column name, because it's a data type.
2) to_char(current_date, 'MM') || to_char(current_date, 'DD') is equivalent to to_char(current_date, 'MMDD').
3) You should get the same result with

  ... where icao = 'KSFO'
        and (EXTRACT(MONTH from date) = 9)
        and (EXTRACT(DAY from date) BETWEEN 23 AND 29)
  group by ...

Then you lost me with your 3-day idea. 8-} It might be depressingly slow, but depending on how time-critical the report is, you could do something like

  select distinct (date) date,
         (select max(dc1.tmax) from daily_climate as dc1
          where dc1.date between (dc0.date - interval '1 day') and (dc0.date + interval '1 day')) as max_tmax,
         (select min(dc1.tmax) from daily_climate as dc1
          where dc1.date between (dc0.date - interval '1 day') and (dc0.date + interval '1 day')) as min_tmax,
         ...
  from daily_climate as dc0

That's just something that might get you a result. I didn't try it out.

Kai Carter schrieb:
> I currently have an SQL statement that selects a week of descriptive statistics for various historical weather variables, sorted by date.
>   SELECT to_char(date, 'MM')||to_char(date, 'DD') as date,
>          max(tmax) as max_tmax, min(tmax) as min_tmax, avg(tmax) as mean_tmax,
>          stddev(tmax) as std_tmax, count(tmax) as count_tmax, sum(tmax) as sum_tmax,
>          variance(tmax) as var_tmax
>   FROM daily_climate
>   where icao='KSFO'
>     and (EXTRACT(MONTH from date) = 9 and EXTRACT(DAY from date) = 23)
>      or (EXTRACT(MONTH from date) = 9 and EXTRACT(DAY from date) = 24)
>      or (EXTRACT(MONTH from date) = 9 and EXTRACT(DAY from date) = 25)
>      or (EXTRACT(MONTH from date) = 9 and EXTRACT(DAY from date) = 26)
>      or (EXTRACT(MONTH from date) = 9 and EXTRACT(DAY from date) = 27)
>      or (EXTRACT(MONTH from date) = 9 and EXTRACT(DAY from date) = 28)
>      or (EXTRACT(MONTH from date) = 9 and EXTRACT(DAY from date) = 29)
>   group by date order by date;
> The problem is that I only have 36 years of data to work with, and I would prefer to have a sample of ~100 rather than 30. So the idea would be to have sample statistics for each day made up of 3 days: the current day, the day before and the day after. Is it possible to get this sort of result with one select statement? Thanks in advance for your responses, Kai Carter
Re: [SQL] Need magic for a moving statistic
A. Kretschmer schrieb:
> [...] Or simpler:
>   test=*# select ((d-'2009-10-02'::date)/14) || '. period' as period, sum(value) from foo group by 1;

This is tricky, because you use a text column to sort but need numerical sorting, since there will be more than the periods 0-9. Still, this is the way to handle the periodicity without pulling in some external script language. Thanks for the reply. :)
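A small variation that keeps the period as an integer, so both grouping and ordering stay numerical (table and column names as in the quoted example):

  SELECT (d - DATE '2009-10-02') / 14 AS period_nr, sum(value)
  FROM   foo
  GROUP  BY 1
  ORDER  BY 1;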
[SQL] Need even more magic. Now for tricky counts.
Hi,

there is a vast log table that collects several state data for objects: (log_id, project_fk, object_fk, state_fk, log_type_fk, created_on::timestamp, ...)

  log_id       is a sequence
  project_fk   foreign key on a project table
  object_fk    foreign key on an object table
  state_fk     can have 10 values: 0, 10, 20, 30, ...
  log_type_fk  describes the event that caused the entry

I need counts of the states of objects for a project, starting at a given date t0, at 14-day distances. Because I need a row for every reporting day, I started out by creating a view that selects the relevant project_fk and only those log_type_fk that MIGHT be relevant. state_fk = 0 is irrelevant, too. The same view does a CASE WHEN ... for every state_fk, so that I can add them up later to get a cross-table, e.g.

  case when state_fk = 10 then 1 else 0 end as state_10,
  case when state_fk = 20 then 1 else 0 end as state_20,
  ...

Then the view adds an integer column period_nr that represents the number of 14-day periods since t0. The first 14 days have period_nr = 0, the second 14 days 1, and so on. Now I need a query that calculates the sum of every column state_10, state_20, ..., state_90 from t0 to the current period_nr:

  t0 until t0 + 1 * 14 days ===> count(state_10), count(state_20), count(state_30) ...
  t0 until t0 + 2 * 14 days ...

This would be nice; I'd be glad if you could hint me up to here. Even nicer would be a solution that counts just the last occurrence of every object_fk within the current t0 period. E.g. object_fk = 42 might appear in period 1 with state 50, and in period 3 twice with states 40 and 20. The report should count it in periods 1+2 as 50, and in period 3 and later just 1 time as 20, until the object gets logged again. This might prove to be a wee bit tricky.
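A sketch for the first part (running totals up to each period), assuming the view is called state_view and already exposes period_nr plus the state_10 ... state_90 columns; the "last occurrence per object" refinement would additionally need something like DISTINCT ON (object_fk) ... ORDER BY object_fk, created_on DESC inside the joined set:

  SELECT p.period_nr,
         SUM(v.state_10) AS state_10,
         SUM(v.state_20) AS state_20
         -- ... state_30 to state_90 accordingly
  FROM  (SELECT DISTINCT period_nr FROM state_view) AS p
  JOIN   state_view v ON v.period_nr <= p.period_nr
  GROUP  BY p.period_nr
  ORDER  BY p.period_nr;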
[SQL] constants in 2-column foreign keys or how to design a storage for text-groups ?
Hi,

my frontend has a lot of combo- and listboxes where one can choose a text snippet that represents a key number, which is stored in several tables as foreign-key attributes. Those text snippets are usually semantically grouped in 2-10 strings that belong together somehow. Stupid example:

  color:     red, green, blue
  size:      tiny, little, big, giant
  structure: hard, soft, floppy

Now I'd like to build tables like

  thing ( color_fk     foreign key to color,
          size_fk      foreign key to size,
          structure_fk foreign key to structure,
          sometext, atimestamp, ... )

So far no problems. With time those little text-list tables clutter up the database, so I'm thinking about one big text storage that has the groups represented by a number, like:

  snippets (snippet_id, snippet_group_nr, snippet)
  (100, 1, red), (101, 1, green), (102, 1, blue), (200, 2, tiny), (201, 2, little), ...

Simple foreign keys still work nicely, but they can't prohibit storing id values from the wrong group. Here color_fk would only be correct if the id is out of group 1. The foreign key doesn't catch it if I put a group-3 id into color_fk. It'd be cool to be able to have constants in 2-column foreign keys, like

  color_fk integer not null default 0
  FOREIGN KEY (color_fk, 1) REFERENCES snippets (snippet_id, snippet_group_nr)

This throws an error, so this approach might not be advisable. I could add an additional column for every foreign key that stores the constant group id; then I can have a 2-column FK, but this looks bloated since those extra columns would hold eternally the same number in every row. How would I solve this rather common text storage issue?
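The extra-column variant can at least be made cheap and self-maintaining: each group column gets a DEFAULT plus a CHECK pinning it to the constant, and the composite FK does the rest. A sketch (snippets as defined above):

  ALTER TABLE snippets ADD UNIQUE (snippet_id, snippet_group_nr);   -- FK target

  CREATE TABLE thing (
      color_fk   integer NOT NULL,
      color_grp  integer NOT NULL DEFAULT 1 CHECK (color_grp = 1),
      size_fk    integer NOT NULL,
      size_grp   integer NOT NULL DEFAULT 2 CHECK (size_grp = 2),
      sometext   text,
      FOREIGN KEY (color_fk, color_grp) REFERENCES snippets (snippet_id, snippet_group_nr),
      FOREIGN KEY (size_fk,  size_grp)  REFERENCES snippets (snippet_id, snippet_group_nr)
  );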
Re: [SQL] Re: constants in 2-column foreign keys or how to design a storage for text-groups ?
Jasen Betts schrieb:
> On 2009-12-09, Andreas wrote:
>> ... stupid example:
>>   color:     red, green, blue
>>   size:      tiny, little, big, giant
>>   structure: hard, soft, floppy
>> How would I solve this rather common text storage issue?
> have you considered using enumerated types instead?

Yes, but I need those texts in the GUI to show them in listboxes or comboboxes. There might be changes, too, when I later need to add or drop an option of a group. E.g. there are questionnaires to model. Let's say 10 questions, each with a couple of predefined answers of which one should be selected. So I've got to store every group of possible answers to a question either in a separate table or in a kind of repository all within one big table:

  (row_id, questionnaire_id, question_id, answer_nr, answer)

The row_id is there so I just need to store 1 value per answer. Now I've got to make sure that it is impossible for a question-17 answer to accidentally get connected to question 42 and, above all, that the questions don't get mixed up between the questionnaires. I can do this in the frontend and with enough hope that nothing bad will happen. Still, I'd rather hardwire the integrity into the table design.
[SQL] How to cascade information like the user roles ?
Hi,

I need something like the user roles of PG to store options for my users. I guess I need a table with roles, one with options and one that stores the references from roles to options:

  roles (role_id, role_name)
  option (option_id, option_name)
  role_has_option (role_fk, option_fk)

So far it's easy. Now I can let role1 have option1 and option2 ... But I'd further like to let role2 inherit role1's options and also have option3:

  role_inherits_role (parent_role_fk, child_role_fk)
  1, 2

What SELECT would deliver all options for role2 including the inherited ones? Like:

  role_id, option_id
  2, 1
  2, 2
  2, 3
Re: [SQL] How to cascade information like the user roles ?
Filip Rembiałkowski schrieb:
> 2010/1/19 Andreas <maps...@gmx.net>
>> [...] What SELECT would deliver all options for role2 including the inherited ones?
>   select role_fk as role_id, option_fk as option_id
>   from role_has_option
>   where role_fk = 2
>   union
>   select inh.child_role_fk, opt.option_fk
>   from role_has_option opt
>   join role_inherits_role inh on inh.parent_role_fk = opt.role_fk
>   where inh.child_role_fk = 2

Thanks. I am looking for a more general solution that expands even multiple steps of inheritance, as in a more complex example:

  role_1 --> option_1 + option_2
  role_2 --> option_3 and inherits role_1
  role_3 --> option_2 + option_4
  role_4 --> option_5 and inherits role_2 and role_3

I need a general solution that gives all options for any given role, including every inherited option over an unlimited hierarchy of parents. Sounds complex, I know, but this is what PG does with its user roles. So in this example I'd do a SELECT ... WHERE role_id = 4 and get

  4, 5   directly
  4, 3   from role_2
  4, 1   from role_1 over role_2
  4, 2   from role_1 over role_2
  4, 2   from role_3 (inherited double occurrence)
  4, 4   from role_3
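A sketch with a recursive CTE (8.4 and later): collect the role and all of its ancestors first, then join their options; the UNION (rather than UNION ALL) keeps each ancestor role only once even if it is reachable over several paths:

  WITH RECURSIVE all_roles(role_id) AS (
      SELECT 4                                   -- the role we ask about
    UNION
      SELECT r.parent_role_fk
      FROM   role_inherits_role r
      JOIN   all_roles a ON a.role_id = r.child_role_fk
  )
  SELECT 4 AS role_id, o.option_fk AS option_id
  FROM   all_roles a
  JOIN   role_has_option o ON o.role_fk = a.role_id;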
Re: [SQL] How to cascade information like the user roles ?
Thanks a whole lot and then some. :) It's great that you actually wrote a working script. I find it tremendously easier to learn from a working example than from some links to other documentation which may or may not make sense. I've got an 8.4 server, so both ways work nicely.

Is there a way to prevent inheritance loops instead of denying parent_id > child_id? Parallel inheritance has to keep working, though: 1 <-- 2 and 3 <-- 4.

I found the function running into a stack overflow when I intentionally created a loop to check what would happen. At least that is capped by the predefined limit max_stack_depth. The WITH RECURSIVE thingy just kept going, which generally tends to be worse, I guess.

Regards :)
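A sketch of loop protection inside the recursive query itself rather than in the data (truly preventing loops at INSERT time would take a trigger doing a similar walk): carry the path of visited roles along and refuse edges that would close a cycle (8.4+, names as in this thread):

  WITH RECURSIVE walk(role_id, path) AS (
      SELECT 4, ARRAY[4]
    UNION ALL
      SELECT r.parent_role_fk, w.path || r.parent_role_fk
      FROM   role_inherits_role r
      JOIN   walk w ON w.role_id = r.child_role_fk
      WHERE  NOT r.parent_role_fk = ANY (w.path)   -- already visited -> stop here
  )
  SELECT DISTINCT role_id FROM walk;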
[SQL] Howto have a unique constraint on UPPER (textfield)
Hi,

is there a way to define a unique constraint on UPPER (textfield)? E.g.

  mytable (
      name_id serial PRIMARY KEY,
      name varchar(255),
      UNIQUE ( upper (name) )
  )

psql throws a syntax error because of the upper() function. I need to prevent 2 strings like cow, Cow, CoW from both appearing in the name column.
Re: [SQL] Howto have a unique constraint on UPPER (textfield)
Joshua Tolley schrieb:
> On Sun, Jan 31, 2010 at 03:26:14AM +0100, Andreas wrote:
>> is there a way to define a unique constraint on UPPER (textfield)? [...] psql throws a syntax error because of the upper() function. I need to prevent 2 strings like cow, Cow, CoW from both appearing in the name column.
> Like this:
>   5432 j...@josh# create table c (d text);
>   CREATE TABLE
>   5432 j...@josh*# create unique index c_ix on c (upper(d));
>   CREATE INDEX
>   5432 j...@josh*# insert into c (d) values ('text');
>   INSERT 0 1
>   5432 j...@josh*# insert into c (d) values ('tExt');
>   ERROR:  duplicate key value violates unique constraint "c_ix"

Thanks for clearing this up. :) It works with CREATE UNIQUE INDEX. So I had the misconception that UNIQUE (...) within CREATE TABLE (...) was just a shorter way to define a unique index, which it is not.
Re: [SQL] Howto have a unique constraint on UPPER (textfield)
Tom Lane schrieb:
> Andreas writes:
>> So I had the misconception that UNIQUE (...) within CREATE TABLE (...) was just a shorter way to define a unique index, which it is not.
> Well, it is that --- it just doesn't provide access to all the features that CREATE INDEX does.

So if it is a shortcut for "create index", why is the function call of upper not accepted when the SQL parser maps the unique constraint onto the "create index" command? The parser could just take everything in the ( ) and use it as is. Somehow there must be a notice in the metadata to mark the difference: pgAdmin shows a unique created within "create table" as a constraint but no index. The unique index only shows up when it is created separately.

regards Andreas
[SQL] Howto get a group_number like row_number for groups
Hi

I'd like to have an alternating color index in the output of a query, so that consecutive rows that are the same within a column get the same number. The query generates readable output from a log table and a few others that hold the referenced texts:

  log     (log_id int, log_event_fk int, object_fk int, ts timestamp)
  events  (event_id int, event text)
  objects (object_id int, object text, is_active bool)

The easy part :) -- mind that the ordering is not primarily based on the timestamp:

  select log_id, event, object, ts
  from log
  join events on event_id = log_event_fk
  join objects on object_id = object_fk
  where objects.is_active
  order by object, ts

Now I'd need a dynamically generated column that alternates between 0 and 1, so that I can later color the rows where object is the same.

  row_number() over (order by object, ts) % 2   or   rank() over (order by object, ts) % 2

produces the 0/1 alternation per row. When I create a subselect for objects that adds the color index and join this to the log instead of objects, I get the group color only if I omit the sorting on the timestamp. When I order the outer select by object, ts, the color index becomes 0 in every row. :( I'd like to get something like this:

  3, up,   dev3,  2010-4-2 10:00, 0
  8, down, dev3,  2010-4-2 14:00, 0
  9, down, dev3,  2010-4-2 15:00, 0
  1, up,   dev7,  2010-4-2 09:00, 1
  5, down, dev7,  2010-4-2 17:00, 1
  2, up,   dev11, 2010-4-2 12:00, 0
  7, down, dev11, 2010-4-2 13:00, 0
  ...

regards :)
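A sketch using dense_rank(): it assigns every distinct object the same rank, so the rank modulo 2 gives the alternating color index directly in the query, independent of the timestamp ordering:

  SELECT log_id, event, object, ts,
         dense_rank() OVER (ORDER BY object) % 2 AS colorindex
  FROM   log
  JOIN   events  ON event_id  = log_event_fk
  JOIN   objects ON object_id = object_fk
  WHERE  objects.is_active
  ORDER  BY object, ts;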
[SQL] How to find broken UTF-8 characters ?
Hi,

I regularly have to import Excel files that hold rather simple text and number columns. That works. Occasionally there are unwanted special characters at the end of text columns that Excel and pgAdmin either show as an upward arrow with a short leg bent 90° to the right, or that are invisible in UTF-8 aware programs, or that get displayed as ? by Access. A text viewer shows "ÔÇÄ", or E2 80 8E in hex, for the invisible thingy. My database is Unicode, so it doesn't mind those freak characters. The problem arises when I need to export those records to CSV with pgAdmin: pgAdmin complains about not being able to store those lines in the local charset.

How can I find those broken UTF-8 characters? How can I get rid of them?
Re: [SPAM]-D] [SQL] How to find broken UTF-8 characters ?
Am 26.04.2010 12:12, schrieb silly sad:
> On 04/26/10 04:12, Andreas wrote:
> looks like a complete offtopic

Not anymore. The bad characters are in the DB now. I'd need some command that somehow filters for inconvertible (Unicode --> local charset) data. How can I find those Unicode characters that have already sneaked in? Actually there shouldn't be anything within the tables that NEEDS to be coded in Unicode. Something like

  SELECT * FROM tab_1 WHERE field_x <> ConvertToLocal(field_x)

might be a good start.

>> How can I get rid of them?
> iconv -c

AFAIK iconv would translate on the file-system level, but I would think that messes up an already messed-up Excel workbook even further. I'd be glad to handle CSV, too.

> BUT u should not have those characters at all. if one is occured it most probably an error

Sure, but those files reach me over a chain of people who consider it OK to convert data across numerous file formats, cut, edit, save as X, send per mail ... then they hit me and I am the one to clean up.

> AND u should get rid of this error itself -- not of its consequences.

Like quitting the job and growing flowers instead? I'll consider this. ;)
Re: [SPAM]-D] [SQL] How to find broken UTF-8 characters ?
Hi,

while writing the reply below I found it sounds like being OT, but it's actually not. I just need a way to check if a column contains values that CAN NOT be converted from UTF8 to Latin1. I tried:

  Select convert_to (my_column::text, 'LATIN1') from my_table;

It raises an error that says, translated: ERROR: character 0xe28093 in encoding »UTF8« has no equivalent in »LATIN1«. I'd like to select all those records; when I know which record has faulty content I can correct it. If this is really OT on the SQL list then please tell me where to ask.

Am 28.04.2010 15:18, schrieb Justin Graf:
> On 4/26/2010 8:41 AM, Andreas wrote:
>>> How can I get rid of them?
>>> iconv -c
>> AFAIK iconv would translate on the file-system level, but I would think that messes up an already messed-up Excel workbook even further. I'd be glad to handle CSV, too.
> I would look at a macro/script to have Excel dump the data out in CSV, then move the data into Postgres.

It's like this: I load the spreadsheet into an Access DB and let a VBA script stuff the data into PG via ADODB/ODBC. Often I have to clean up more obvious things than obscure characters, or amend the info from other sources, before I can upload it to PG.

> Now these are not illegal UTF chars. If those values were wacky Postgresql would not have allowed you to insert the record. Ô = utf code 212, Ç = utf code 199, Ä = utf code 196. Those are even in Latin1.

They were only 1 example. I suppose where I find them, the 3 codes form a multibyte code that can't be displayed, or doesn't get displayed as a usual letter but as some symbol or Asian-looking thing which definitely doesn't belong there. I saw occasionally that such a wacky symbol replaced other language-specific characters like ä, ö, ü. Then the next character is missing too, so something is mixing up the encoding and combines 2 chars into 1 UTF-8 code.

> To force a string into a specific encoding we have Convert, Convert_From and Convert_To, see section 9.5 in the help files.

The problem is that pgAdmin complains those characters aren't convertible and drops the whole record out of the result of the select that I'd like to dump into a CSV.

> Select convert('MyUtf8', 'UTF8', 'LATIN')   or   Select convert_to('MyUtf8', 'LATIN')

I found them before but didn't understand their output. E.g.

  Select convert('1aäßx', 'utf8', 'LATIN1');
  Result = "1a\344\337x"

So it translated ä = 344 and ß = 337. The other 3 are just as they were before. How can this be valid in a single-byte charset like Latin1, especially as ä, ß are E4 and DF? Why do they come out as escaped codes when they are in Latin1 just as well as 1, a and x?

> Whatever pg client library was used to move the Excel data to PG may have incorrectly converted some of the data or moved formatting information into the database. I have seen Access and Excel do mightily odd things when connecting to DBs. I don't know about current versions, but the 2000 and 2003 Excels did really stupid things when trying to write to DBs, including MSSQL.

Cute ... we use Access 2000 and 2003 :(
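One way to list exactly the offending rows is a small helper that tries the conversion and swallows the error; is_latin1 and my_table/my_column are made-up names for this sketch:

  CREATE OR REPLACE FUNCTION is_latin1(t text) RETURNS boolean AS $$
  BEGIN
      PERFORM convert_to(t, 'LATIN1');   -- raises 22P05 if a character has no LATIN1 equivalent
      RETURN true;
  EXCEPTION WHEN untranslatable_character THEN
      RETURN false;
  END;
  $$ LANGUAGE plpgsql STABLE;

  SELECT * FROM my_table WHERE NOT is_latin1(my_column);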
[SQL] How to find events within a timespan to each other?
Hi,

is there a way to find events in a log that happen within a certain timespan of each other? The log is like this:

  event_id      integer not null default 0 primary key
  event_type_id integer not null default 0
  user_id       integer not null default 0
  event_ts      timestamp(0)

I need every event of a type that happened more than once within 5 minutes of another one of the same user.

  17  3  1  ... 12:00
  18  2
  19  3  1  ... 13:03
  20  3  2  ... 13:03
  21  3  1  ... 13:04
  22  2  .
  23  3  1  ... 13:05
  24  2  1  ... 13:06

E.g. the checked event_type_id may be 3; then the result should be lines 19, 21 and 23.

regards Andreas :)
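A sketch with a self-join: pair every event with other events of the same user and type within 5 minutes, and keep those that find at least one partner (the table is assumed to be called log; 3 is the checked type):

  SELECT DISTINCT a.*
  FROM   log a
  JOIN   log b ON  b.user_id       = a.user_id
               AND b.event_type_id = a.event_type_id
               AND b.event_id     <> a.event_id
               AND b.event_ts BETWEEN a.event_ts - interval '5 minutes'
                                  AND a.event_ts + interval '5 minutes'
  WHERE  a.event_type_id = 3
  ORDER  BY a.user_id, a.event_ts;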
[SQL] How would I store little key-Nr to text lists?
Hi

how would I store little key-number-to-text lists? Like e.g. colors:

  1  red
  2  green
  3  blue

Maybe I later also need to add 4 yellow? Obviously the numbers are used for foreign keys in data tables and the texts appear in selects. On the other hand, users should choose from listboxes in an application, so I need to be able to read the (key, text) tuples. For now I use 2-column tables that get joined to a data table in selects. Is there a more clever way ?
[SQL] howto delete using a join ?
Hi,

is there a way to delete from a table using information from another table to decide if a row should be dropped? In my case there is a log:

  events      ( event_id, event_type_fk, ... );
  event_types ( event_type_id, relevance_level );

Now I'd like to delete all old events with certain relevance_levels -- but not all! This works:

  delete from events
  where event_id in ( select event_id
                      from events
                      join event_types on event_type_id = event_type_fk
                      where relevance_level in ( 1, 3, 5, 7 )
                        and create_ts < '2010/01/01' );

The following doesn't work, but is there a more direct way which doesn't involve a subselect?

  delete from events
  join event_types on event_type_id = event_type_fk
  where relevance_level in ( 1, 3, 5, 7 );
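There is a direct form: DELETE ... USING joins in the extra table much like UPDATE ... FROM does (keeping the date filter from the working version):

  DELETE FROM events
  USING  event_types
  WHERE  event_type_id = event_type_fk
    AND  relevance_level IN (1, 3, 5, 7)
    AND  create_ts < '2010-01-01';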
[SQL] Help Need some hindsight
Hi,

I need to display log events (again). The log is simply like this:

  log ( log_id     serial primary key,
        create_ts  timestamp default localtimestamp,
        object_id, state_id, ... )

It records the state of objects, and when and what happened to change that state. I'd like to get a list that shows the state at any point in time and, for the same object_id, the state of the last event before it. The tricky bit is that both states should appear in the same row, for every row. Help? :)
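A sketch with the lag() window function (8.4+), which pulls the previous state of the same object into the current row:

  SELECT log_id, create_ts, object_id,
         state_id,
         lag(state_id) OVER (PARTITION BY object_id ORDER BY create_ts) AS previous_state
  FROM   log
  ORDER  BY object_id, create_ts;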
[SQL] is there a distinct function for comma lists ?
Hi,

is there a distinct function for comma-separated lists? I sometimes need to update tables where I get a set of IDs, like:

  update mytable set someattribute = 42
  where mytable.id in ( 1, 2, 3, 5, 7, 11, 3, 6, 13, 13, 3, 11 ... )

So there are double entries in the list, but in this case it's just overhead and no problem. For calculated values, though, this would not always be desirable:

  update mytable set someattribute = someattribute + 1
  where mytable.id in ( 1, 2, 3, 5, 7, 11, 3, 6, 13, 13, 3, 11 ... )

How could I get a distinct list? Those lists can have 2000-3000 IDs sometimes. One solution would be as follows, but perhaps there is something more elegant?

  update mytable set someattribute = someattribute + 1
  where mytable.id in ( select distinct id from mytable
                        where id in ( 1, 2, 3, 5, 7, 11, 3, 6, 13, 13, 3, 11 ... ) )

And as a bonus ... is there a way to find IDs that are in the list but not in the table, without creating a temporary table and using a join?
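For the bonus question, a sketch that treats the literal list as a set without a temporary table (note that IN matches each row only once anyway, so duplicates in the list never cause a double update):

  SELECT v.id
  FROM   unnest(ARRAY[1, 2, 3, 5, 7, 11, 3, 6, 13, 13, 3, 11]) AS v(id)
  EXCEPT
  SELECT id FROM mytable;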
[SQL] HowTo divide streetname from house-nr ?
Hi,

how could I divide street names from house numbers? I have to deal with input like this:

  Parkstreet 42
  Parkstr. 42
  Casle Avenue 42
  Casle Str. 42-47
  Casle Str. 54 - 55

and probably even

  Casle Str. 42-47 a

Perhaps one could cut at the 1st numeric char and regard everything left of it as the street name and the rest as the house number. OK, this would fail with "42, Parkstreet", but those aren't too frequent. How would I do this?
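A sketch of exactly that cut-at-the-first-digit rule, using substring() with a POSIX pattern; addresses/addr are assumed names:

  SELECT trim(substring(addr FROM '^[^0-9]*'))  AS street,
         trim(substring(addr FROM '[0-9].*$'))  AS house_no
  FROM   addresses;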
[SQL] Need magic for inserting in 2 tables
Hi,

I need to insert a lot of basically blank records into a table, to be filled later. Sounds silly, but please bear with me. :)

  projects          ( project_id, project_name, ... )
  companies         ( company_id, ... )
  departments       ( department_id, department )
  staff             ( staff_id SERIAL, company_fk, department_fk, ... )
  company_2_project ( project_fk, company_fk )
  staff_2_project   ( project_fk, staff_fk, project data, ... )

So with this I can store that company 99 belongs e.g. to projects 3, 5 and 42, and that staff_ids 11, 13, 17 belong to company 99. staff_2_project represents the connection of staff members to a project and holds project-related infos. Now say I already have 100 companies out of the bigger address pool connected to project 42, and I now want to add blank staffers out of departments 40 and 50 linked to this project. I do step 1:

  insert into staff ( company_fk, ..., department_fk )
  select company_fk, ..., department_fk
  from departments, companies, company_2_project AS c2p
  where company_id = c2p.company_fk
    and c2p.project_fk = 42
    and department_id in ( 40, 50 );

Step 2 would be to link those new blank staff records to project 42 by inserting a record into staff_2_project for every new staff_id. How can I find the new staff_ids while making sure I don't pick up ids from other sessions? Is there an elegant way in SQL ?
Re: [SQL] Need magic for inserting in 2 tables
Am 04.10.2010 01:46, schrieb Scott Marlowe:
> On Sun, Oct 3, 2010 at 4:14 PM, Andreas wrote:
>> [...] How can I find the new staff_ids while making sure I don't pick up ids from other sessions? Is there an elegant way in SQL ?
> Use returning?
>   insert into ... yada returning field1, field2, field3

It seems the inserts can't be chained? :( The inner insert works when I run it separately, but when I run the chained inserts I get a syntax error. How can a script use what RETURNING dumps out? I tried a bit but got nowhere.

  insert into staff_2_project ( staff_fk, project_fk )
  insert into staff ( company_fk, ..., department_fk ) [...]
  returning staff_id, 42 as project_fk;

and

  insert into staff_2_project ( staff_fk, project_fk )
  ( insert into staff ( company_fk, ..., department_fk ) [...]
    returning staff_id, 42 as project_fk ) as s;

and

  insert into staff_2_project ( staff_fk, project_fk )
  select staff_id, project_fk
  from ( insert into staff ( company_fk, ..., department_fk ) [...]
         returning staff_id, 42 as project_fk ) as s;
Re: [SQL] Need magic for inserting in 2 tables
Am 04.10.2010 02:58, schrieb Scott Marlowe:
> The same way it would use the output of a select, it's a record set. So it's x rows by y columns.

Then where were my insert statements wrong? Please look, this is a simple but complete example; show me my error.

  create temporary table table_1 ( id_1 serial primary key, txt text );

  create temporary table table_2 as select 42::integer as id_2;
  ALTER TABLE table_2 ADD CONSTRAINT t2_pkey PRIMARY KEY ( id_2 );

  create temporary table t1_t2 (
      fk_1 integer references table_1 ( id_1 ),
      fk_2 integer references table_2 ( id_2 ) );

  -- delete from table_1;
  insert into t1_t2 ( fk_1, fk_2 )
  insert into table_1 ( txt ) values ( 'A' ), ( 'B' ), ( 'C' )
  returning id_1, 42;

The inner insert works and dumps the inserted ids along with the constant, which is needed in the outer insert as a reference to the project. Both inserts run together give an error.
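Chaining them like this needs a data-modifying WITH (writable CTE), which exists from 9.1 on; a sketch applied to the example above:

  WITH new_rows AS (
      INSERT INTO table_1 (txt)
      VALUES ('A'), ('B'), ('C')
      RETURNING id_1
  )
  INSERT INTO t1_t2 (fk_1, fk_2)
  SELECT id_1, 42 FROM new_rows;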
[SQL] Is there a conditional string-concatenation ?
Hi,

Is there a conditional string concatenation? I'd like an elegant way to connect 2 strings with some 3rd element in between, but only if there really are 2 strings to connect. E.g.

  MyCat ( 'John', '_', 'Doe' )  --> 'John_Doe'

while

  MyCat ( 'John', '_', '' )     --> 'John'
  MyCat ( '', '_', 'Doe' )      --> 'Doe'
  MyCat ( '', '_', '' )         --> NULL

It should treat NULL and '' equally as empty, and it should trim each of the 3 elements, so

  MyCat ( ' John ', '_', NULL ) --> 'John'
  MyCat ( 'John', NULL, 'Doe' ) --> 'JohnDoe'
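A sketch of such a helper, leaning on the fact that array_to_string() skips NULL elements; mycat is of course a made-up name:

  CREATE OR REPLACE FUNCTION mycat(text, text, text) RETURNS text AS $$
      SELECT nullif(
                 array_to_string(
                     ARRAY[ nullif(trim(coalesce($1, '')), ''),
                            nullif(trim(coalesce($3, '')), '') ],
                     coalesce($2, '')),
                 '');
  $$ LANGUAGE sql IMMUTABLE;

  SELECT mycat(' John ', '_', NULL);   -- 'John'
  SELECT mycat('', '_', '');           -- NULL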
[SQL] How to collect text-fields from multiple rows ?
Hi,

how can I collect text fields from multiple rows into one output row? I'd like to do the equivalent of the aggregate function SUM(), only for text. The input is a select that shows, among other things, a numerical column which I'd like to group by. The text column of all rows in a group should get concatenated into 1 text, divided by '\n'. Even better would be if I could add a second text column per line as a topic.

Input, e.g. select group_nr::integer, memo::text, topic::text ...

  1, 'bla ', 'weather'
  2, 'yada..', 'weather'
  2, 'talk talk..', 'cooking'
  2, 'words words, ...', 'poetry'
  3, ...

Output:

  1, 'weather\nbla...'
  2, 'weather\nyada..\ncooking\ntalk talk..\npoetry\nwords words, ...'

Even better would be to get some kind of line characters as an optical divider between the topics, let's say 10 '=' above all topic lines:

  2, '==\nweather\nyada..\n==\ncooking\ntalk talk..\n==\npoetry\nwords words, ...'
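A sketch with string_agg() (9.0+), which also accepts an ORDER BY inside the aggregate so the topic blocks come out in a fixed order; the divider is simply concatenated in front of each piece, and "input" stands in for the source select:

  SELECT group_nr,
         string_agg('==========' || E'\n' || topic || E'\n' || memo,
                    E'\n' ORDER BY topic)  AS memos
  FROM   input
  GROUP  BY group_nr
  ORDER  BY group_nr;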
[SQL] insert into table from list or array ?
Hi,

is it possible to insert into a table from a list or an array? Suppose there is a set of numbers that might be IDs out of tables within the DB. To work with them I'd need a temporary table that just holds a single column with those numbers. Something like

  create temporary table tmptable as
  select id from ( 2, 3, 5, 7, 11, 13, 17, 19, 23, 31, 37, ... );

would be great. I get those numbers as a text file with 10 numbers per line, divided by comma+space as in the sample above, though the comma+space is negotiable if it were an issue. :)
Re: [SQL] insert into table from list or array ?
Am 18.10.2010 20:14, schrieb Pavel Stehule:
> 2010/10/18 Andreas:
>> is it possible to insert into a table from a list or an array ?
> yes, it's possible
>   INSERT INTO tmptab
>   SELECT v
>   FROM unnest(string_to_array('1,2,4,2,1',',')) g(v)

Thanks Pavel -- I'm just not yet 100% sure how it works, but it works. :)
[SQL] How to search for a part of a number
Hi,

I'm wondering if there is a clever way to find parts of a numeric string in another table. There is a table that holds city codes and city names. The city code would be the part of a phone number that identifies the city. Over here this code can have 2 - 5 digits. So the table would contain:

  23   ; A-City
  345  ; B-Town
  4455 ; C-Village
  632  ; D-Town
  ...

I'm quite sure the numbering system is bound to span a search tree. So there are 2 tasks:

  a) I get 445598765 in and would like to know the corresponding city.
  b) I'd like to get the number nicely formatted: 4 4 55987-65 --> 4455 / 98765

Obviously one could do it on the client side. As I don't know how many digits of the input are relevant, I'd need to try the shortest code first:

  1) search for 44
  2) add 5 and search for 445
  3) add the 2nd 5 and search for 4455 -- BINGO

Is there a way to do it within the DB?
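For task a), a sketch that simply tries all codes as prefixes and keeps the longest hit (longest-prefix match), assuming the lookup table is called city_codes(code, city):

  SELECT c.code, c.city
  FROM   city_codes c
  WHERE  '445598765' LIKE c.code || '%'
  ORDER  BY length(c.code) DESC
  LIMIT  1;

  -- task b) then falls out of the same match:
  SELECT c.code || ' / ' || substr('445598765', length(c.code) + 1)
  FROM   city_codes c
  WHERE  '445598765' LIKE c.code || '%'
  ORDER  BY length(c.code) DESC
  LIMIT  1;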
Re: [SQL] Sorting router interfaces
Am 01.11.2010 13:15, schrieb Brian Sherwood:
> I am trying to sort router interface names. The problem is that I am doing a text sort and need to do a numerical sort. What I get instead is the following text ordering:
>   GigabitEthernet1/0/1  | 1/0/1  | {1,0,1}
>   GigabitEthernet1/0/10 | 1/0/10 | {1,0,10}
>   GigabitEthernet1/0/11 | 1/0/11 | {1,0,11}
>   GigabitEthernet1/0/12 | 1/0/12 | {1,0,12}
>   GigabitEthernet1/0/13 | 1/0/13 | {1,0,13}

This was the easy part. Suppose those lines above were the input table "interfaces" and the columns were called c1, c2, c3. Since c3 is already an array you could do this:

  select * from interfaces
  order by (c3::integer[])[1], (c3::integer[])[2], (c3::integer[])[3]

Records of this type

  lc-5/2/0.32769 | 5/2/0.32769 | {5,2,0.32769}

work with:

  order by (c3::float[])[1], (c3::float[])[2], (c3::float[])[3]

Now you "just" need to identify those records which won't produce such nice numerical arrays. Then split the two sets up, sort them in separate selects, add a set_nr and a row_number() as row_nr. Then UNION both sets together again AND finally do an ORDER BY set_nr, row_nr, and you're already done. OK, that would be just an idea :)
[SQL] unexpected ORDER BY
Hi,

I've got a PG 8.4 running on an openSUSE box. The DB is Unicode with a German locale. There is a text column I'd like to order by, but as far as I can see PG ignores the special chars and sorts only on letters and numbers. E.g. I get:

  S&T C...
  S&T E...
  STP      <-- ???
  STP A    <-- ???
  S&T P...
  S&T R...

Can I modify the SELECT or, even better, switch a global setting so that PG uses all chars to sort?
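Two ways to get a plain character-by-character ordering instead of the locale's dictionary rules; the COLLATE clause needs 9.1+, while ORDER BY ... USING ~<~ (the text_pattern_ops ordering) already works on 8.4 (company/name are assumed names):

  SELECT name FROM company ORDER BY name COLLATE "C";   -- 9.1 and later
  SELECT name FROM company ORDER BY name USING ~<~;     -- works on 8.4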
[SQL] multi table import from 1 denormalized source table
Hi,

I frequently get tables from spreadsheets to import into the DB. Usually they look like this: A1, A2, A3, A4, B1, B2, B3, with optional C1, C2, D1, D2, ..., and there is a 1:n relation between A and B. If provided, C would be 1:1 to A and D 1:1 to B.

Up until now I let a VBA script order the source table by A, then scan the table line by line, create a new entry in the target table A* and fetch its serial ID every time the script notices that A changed. With this A-ID it creates 1 C* and as many B*s as needed until A changes again ... and of course fetches the B-ID to attach the D* records with a foreign key column.

Now I'm trying to get away without the VBA stuff. Is there a clever way to split such denormalized sources while still obtaining the needed IDs to connect everything?
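A sketch of doing the split entirely in SQL with a staging table: insert the distinct A-part first, then join the staging rows back on the natural key to pick up the generated ids for the B-part (all names here are placeholders):

  CREATE TEMP TABLE staging (a1 text, a2 text, b1 text, b2 text);
  -- \copy staging FROM 'export.csv' CSV

  INSERT INTO parent (a1, a2)
  SELECT DISTINCT a1, a2 FROM staging;

  INSERT INTO child (parent_fk, b1, b2)
  SELECT p.parent_id, s.b1, s.b2
  FROM   staging s
  JOIN   parent  p USING (a1, a2);   -- join back on the natural key to get the serial id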
[SQL] Howto "insert or update" ?
Hi,

is there an elegant way to tell PG:

  a) Hey PG, look, here are e.g. 3 values A, B, C for tableX.
  b) Please check if there is a row matching A and B as key in tableX.
  c) If such a row exists, execute an UPDATE on column C, else INSERT a new row.

Currently I have a trigger function that should store a value in tableX whenever a certain column in tableY gets changed. I do it with:

  a) delete from tableX where key = ( A, B )   -- regardless of whether there is one
  b) insert into tableX

This seems not very efficient, though it works. Is there a better way?
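On 9.5 and later this is spelled INSERT ... ON CONFLICT; a sketch, assuming tableX has a unique constraint over (a, b) (on older servers the usual pattern is UPDATE first and INSERT only when no row was hit):

  INSERT INTO tableX (a, b, c)
  VALUES ($1, $2, $3)
  ON CONFLICT (a, b) DO UPDATE
  SET c = EXCLUDED.c;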
[SQL] need help with some aggregation magic
Hi,

I have a log table that stores events of users and projects, like this:

  ( user_id integer, project_id integer, ts timestamp, event_type integer )

I need an aggregated list of work time per user, per project, per day. The users can switch projects during the day, so I can't work this out with min(ts) and max(ts). Is there a clever way to get this with SQL ?
Re: [SQL] need help with some aggregation magic
The log holds events, and the ts is just the timestamp when the event occurred. The events are things like "opened form xxx with id xxx", "clicked button xxx", "switched to record xxx", ... They were primarily meant to help me find possible bugs when a user complains that it doesn't work but can't say what he did or where the error came up.

The projects don't overlap per user. So I have time intervals with events for a project, and I need to find the first and last event of every interval to add up the time differences and calculate the sum per day.

Am 09.06.2011 16:16, schrieb Oliveiros d'Azevedo Cristina:
> Does the ts mean the time the user started on a project? Or the time he finished? Or can it mean both? If so, how can you tell one from the other? Different event_types? Is it correct to assume from your words that a user cannot be in more than one project at a time? If so, they can't be overlapping, right?
> Best, Oliveiros
Re: [SQL] need help with some aggregation magic
Am 09.06.2011 18:20, schrieb Richard Broersma: On Thu, Jun 9, 2011 at 6:43 AM, Andreas wrote: I have a log-table that stores events of users and projects like this ( user_id integer, project_id integer, ts timestamp, event_type integer ) I need an aggregated list of worktime per user, per project, per day. The users can switch projects during the day so I can't work this out with min(ts) and max(ts). SELECT user_id, project_id, date_trunc( 'day', ts ) as event_day, MIN( ts ) AS event_start, MAX( ts ) AS event_end, MAX( ts ) - MIN( ts ) AS duration FROM Loggingtable GROUP BY user_id, project_id, date_trunc( 'day', ts ) ORDER BY date_trunc( 'day', ts ), user_id, project_id; As far as I understand you calculate the duration as the difference between the first and last event of a project per day. There is a problem because a user can work from 08.00 to 10.00 on project 1 and then from 10.00 to 12.00 on project 2 and then from 12.00 to 16.00 on project 1 again. Then I get project 1 8 hours plus project 2 2 hours though the user actually was just 8 hours there. -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
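One way around the min/max problem, as a sketch (not from the thread): assume the log table is called logtable and that the time between two consecutive events of a user on the same day belongs to the project of the earlier event. Then the gaps can be attributed and summed like this:

SELECT user_id,
       project_id,
       ts::date AS work_day,
       sum( next_ts - ts ) AS worktime
FROM (
    SELECT user_id, project_id, ts,
           lead( ts ) OVER ( PARTITION BY user_id, ts::date ORDER BY ts ) AS next_ts
    FROM   logtable
) AS gaps
WHERE  next_ts IS NOT NULL          -- the last event of a user's day has nothing following it
GROUP  BY user_id, project_id, ts::date
ORDER  BY work_day, user_id, project_id;

The time after the last event of a day is not counted; if each day ends with an explicit logout-like event that is exactly right, otherwise a closing event per day would have to be added.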
[SQL] problem with selecting from a function
Hi, I've got a table with a couple of objects. Primary key object_id. There is a function that fetches some values from another table that relate to an object_id. Like fctX ( 7 ) --> set of typX ... ( 14, 'bla' ), ( 17, 'blu' ), ( 21, 'ble' ), ... The result of the function can have 0 or more lines of a defined result-type typX. Those resulting numbers are not object_ids. Now I'd need a SELECT that lists all function results of all object_ids. Like: ... 6, ... 7, 14, 'bla' 7, 17, 'blu' 7, 21, 'ble' 8, ... Actually it was enough to get just the numerical column of the function result. I tried select object_id, fctX (object_id) from objects; Then I get: 7, (14, 'bla') 7, (17, 'blu') 7, (21, 'ble') <--- round brackets This looks like an array but how can I split it up to columns or at least extract the number-column? -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
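A sketch of the usual expansion trick, assuming fctX() is declared to return a composite/table type: wrapping the call in parentheses and appending .* splits the composite value into separate columns.

SELECT object_id, ( fctX( object_id ) ).*
FROM   objects;

-- or pick a single column by name; the_number is a placeholder for the real column name
SELECT object_id, ( fctX( object_id ) ).the_number
FROM   objects;

One caveat: on older releases this notation can evaluate the function once per referenced column.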
[SQL] How to remove a set of characters in text-columns ?
Hi, how can I remove a set of characters in text-columns ? Say I'd like to remove { } ( ) ' " , ; . : ! Of course I can chain replace ( replace ( replace ( replace ( ... , '' ) and replace the chars one by one against an empty string ''. There might be a more elegant way. Is there ? regards -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] How to remove a set of characters in text-columns ?
Am 01.07.2011 04:17, schrieb Tim Landscheidt: besides the regexp_replace() solution mentioned by Charlie and Steve, you can also use TRANSLATE(): | tim=# SELECT TRANSLATE('a{b''c"d!f', '{}()''",;.:!', ''); nice, 2 solutions for 1 problem. :) my replace...replace... was a wee bit tedious ;) thanks a lot to you, Charlie and Steve :) regards Andreas -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
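The regexp_replace() variant mentioned above is not quoted in the thread; presumably it looked roughly like this (a character class of the unwanted characters plus the 'g' flag, so every occurrence is removed):

SELECT regexp_replace( 'a{b''c"d!f', '[{}()''",;.:!]', '', 'g' );   -- yields: abcdf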
[SQL] need magic to shuffle some numbers
Hi, there is a table that has among others an integer primary key "id" and another integer column "prio" as well as an integer "group_id". I'd like to invert the values of the prio-column for one of the groups. The prio numbers start with 3 and there are 1159 different prios in this group. At least every value appears only once. :) Is there an elegant way to switch the prio values around so that every record with the first prio gets the last and vice versa? Then the records with the second smallest prio get the second-biggest value and v.v. ... regards -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
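A sketch of one way to do the inversion in a single statement; mytab and group_id = 1 are made-up names, and the window functions need 8.4 or later. The prios of the group are ranked ascending and descending, the two rankings are paired, and the table is updated from that mapping:

UPDATE mytab m
SET    prio = map.new_prio
FROM (
    SELECT a.prio AS old_prio, d.prio AS new_prio
    FROM  ( SELECT prio, row_number() OVER ( ORDER BY prio )      AS pos
            FROM mytab WHERE group_id = 1 ) AS a
    JOIN  ( SELECT prio, row_number() OVER ( ORDER BY prio DESC ) AS pos
            FROM mytab WHERE group_id = 1 ) AS d USING ( pos )
) AS map
WHERE  m.group_id = 1
  AND  m.prio = map.old_prio;

The smallest prio is paired with the biggest, the second smallest with the second biggest, and so on; since every prio occurs only once in the group, each row is matched exactly once.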
[SQL] How can I inherit constraints?
Hi, how would I let a table inherit the constraints of its parent(s) ? It'd be rather cool not having to repeat every foreign key a table inherited from its parents. -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
[SQL] Clever way to check overlapping time intervals ?
Hi, is there a clever way to check overlapping time intervals ? An option named n is valid from one date to another (d1 to d2). The same name is ok for another interval. e.g. table : mytab ( d1 date, d2 date, n text, v text ) There should be a constraint ensuring that no row can have a d1 or d2 within the interval of another row with the same n. And no row can have an interval that encloses an existing interval. -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
[SQL] insert or update within transaction
Hi, http://www.postgresql.org/docs/current/static/sql-update.html has an example where either an insert or an update is done depending on whether a key already exists. The example is about wines. I did it with numbers. drop table if exists tbl; create table tbl ( key int primary key, val int ); insert into tbl ( key, val ) values ( 1, 1 ), ( 2, 2 ), ( 3, 3 ), ( 8, 8 ); So the key 8 exists. Now I issue the commands according to the example in the docu: BEGIN; -- other operations SAVEPOINT sp1; INSERT INTO tbl VALUES( 8, 15 ); -- Assume the above fails because of a unique key violation, -- so now we issue these commands: ROLLBACK TO sp1; UPDATE tbl SET val = 15 WHERE key = 8; -- continue with other operations, and eventually COMMIT; Instead of the update the query fails with a duplicate key value error for the primary key. Shouldn't the insert fail, get rolled back and then execute an update instead successfully? It would be nice if this actually worked, but is there a more general statement that does an insert if the key doesn't exist or an update if it already is there? As I understand it, if the example above worked, it rolled back the insert in any case and so it is actually equivalent to the update anyway. If the key 8 doesn't exist the example actually does nothing to the table. -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] Use select and update together
Am 13.09.2011 07:50, schrieb pasman pasmański: In 8.4 this syntax is not implemented. select * from ( update tbl set val = 1 where key in ( 1, 2, 3, 4, 5 ) returning * ) as x wouldn't work even in PG 9.1. So what data structure is coming out of an "update ... returning *" statement? It obviously doesn't work like a subquery. -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
[SQL] Howto build a function that selects an id or inserts a value
Hi, I'd like to have a function that looks up an id of an item. In case the item doesn't exist in the table yet it should be inserted and the new id should be returned. From the PG docu, I took the merge_db sample and modified it a bit. This works but I'm wondering if INSERT part could be tuned. Could I have something like i := INSERT INTO _log.computer ( item ) VALUES ( data ) returning id; so I dont have to query the sequence and create another roundtrip on the network? CREATE or replace FUNCTION find_or_insert_item ( data TEXT ) RETURNS integer AS $$ declare i integer; BEGIN LOOP -- first try to select the id of an item select id into i from items where item ilike data; IF found THEN RETURN i; END IF; -- not there, so try to insert the item and retrieve the new id. -- if someone else inserts the same item concurrently, -- we could get a unique-key failure BEGIN INSERT INTO items ( item ) VALUES ( data ); select currval('items_id_seq') into i; RETURN i; EXCEPTION WHEN unique_violation THEN -- Do nothing, and loop to try the SELECT again. END; END LOOP; END; $$ LANGUAGE plpgsql; -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
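In PL/pgSQL the insert can hand back the generated id directly, which removes the currval() call; the INSERT and the currval() select inside the inner BEGIN block above could be written as (a sketch):

INSERT INTO items ( item ) VALUES ( data ) RETURNING id INTO i;   -- captures the serial in one go
RETURN i;

RETURNING ... INTO has been available to PL/pgSQL since 8.2, so nothing else in the function needs to change.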
[SQL] Better way to check more than 1 value NOT IN (...)
Hi, I've got to check if 2 values are not in a set that is dynamically calculated by a function. The query looks like select some_id, from . where 10 is not in ( select x from my_function (some_id)) and 20 is not in ( select x from my_function (some_id)) Is there an efficiency issue with calculating the set twice ? Can I somehow check both (or more) values at once? -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
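A sketch of checking both values against a single evaluation of the set; some_table is a placeholder and the other names are taken loosely from the query above:

SELECT some_id
FROM   some_table
WHERE  NOT EXISTS (
           SELECT 1
           FROM   my_function( some_id ) AS f
           WHERE  f.x IN ( 10, 20 )       -- more values just extend this list
       );

The function result is produced once per row instead of once per checked value, and "none of these values is in the set" is exactly the NOT EXISTS condition.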
[SQL] how to calculate differences of timestamps?
How could I calculate differences of timestamps in a log-table? Table log ( user_id integer, login boolean, ts timestamp ) So login = true would be a login-event and login = false a logout. Is there a way to find the matching login/logout to calculate the difference? Or is there a better table "design" to do this? -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
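A sketch using a window function; it assumes the events come strictly as login/logout pairs per user (table and columns as in the question):

SELECT user_id,
       ts           AS login_ts,
       next_ts      AS logout_ts,
       next_ts - ts AS online_time
FROM (
    SELECT user_id, login, ts,
           lead( ts ) OVER ( PARTITION BY user_id ORDER BY ts ) AS next_ts
    FROM   log
) AS pairs
WHERE  login;      -- keep only the login rows; next_ts is then the matching logout

A table design that avoids the pairing altogether would be one row per session with login_ts and logout_ts columns, filled by the application or by a trigger on logout.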
[SQL] Question on imports with foreign keys
Hi, suppose you need to import a csv with standard columns like name, address, phone, ... and some additional text columns that need to be split off into referenced tables. Those lookup-tables will only be needed for a project with limited lifetime so I create a schema that might be called "project_x". There I create the necessary lookup tables. The core of the import will be added to the customers table with unlimited lifespan. The customers table has a PKey id which is a serial. I don't want to add FKey columns into customers for the new lookup-tables so I create another table in project_x "projectinfos" that stores those FKeys and another FKey that references customers.id. First question: Is this a stupid approach? If not: What is the easiest way to find the customers.id of the new customers so I can insert the projectinfos? -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] Question on imports with foreign keys
Am 08.12.2011 09:39, schrieb Emre Hasegeli: On Thu, 08 Dec 2011 08:48:51 +0200, Andreas wrote: What is the easiest way to find the customers.id of the new customers so I can insert the projectinfos? It is easy to select rows not related with another table. One of the following queries can be used. Select * from "customers" where id not in (select "customerId" from "projectinfos") I'm sorry I wasn't clear enough describing the scenario. Let's say there were already 1000 records in the customers table. Now I add 357 new customers to this table. If I use one of your queries I'd get all 1357 entries of customers since "project_x.projectinfos" would be newly created for this project and therefore empty. I need to know which customers.id was created for which line in the temporary table that I read in with copy. When I have those ids I can fill "project_x.projectinfos" with just those new 357 customers.ids and foreign keys referring to the new lookup-tables. -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
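A sketch (not from the thread) that avoids matching the new rows back on their data columns: insert the staged rows one at a time and write the generated id straight back into the staging table. The names staging, line_no, name and address are assumptions; the staging table is supposed to carry its own line number (e.g. a serial column added when it is created) plus an empty customer_id column.

DO $$
DECLARE
    r      record;
    new_id integer;
BEGIN
    FOR r IN SELECT line_no, name, address FROM staging LOOP
        INSERT INTO customers ( name, address )
        VALUES ( r.name, r.address )
        RETURNING id INTO new_id;                  -- capture the generated serial directly

        UPDATE staging SET customer_id = new_id
        WHERE  line_no = r.line_no;                -- remember it next to the imported line
    END LOOP;
END $$;

Afterwards project_x.projectinfos can be filled with a plain INSERT ... SELECT from the staging table.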
[SQL] Is there a way to remove every character but numbers?
Hi, I need to clean up a text column so that it just contains numbers, or the "+" sign if it is the first character of the string. Well, it is needed to normalise a phone number column. So it'd be great if the "+" could be transformed into "00" in the same step. -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
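A sketch doing both steps in one expression; phone and sometable are placeholder names:

SELECT regexp_replace(
           regexp_replace( phone, '^\+', '00' ),   -- a leading + becomes 00
           '[^0-9]', '', 'g' )                     -- then every non-digit is dropped
FROM   sometable;

With standard_conforming_strings = off (the pre-9.1 default) the first pattern would have to be written as E'^\\+'.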
[SQL] need help with import
Hi I get CSV files to import. The structure is like this: main part, sub part It could look like this: A, a1 A, a2 A, a3 B, b1 B, b2 The database has a table for main_part and one for sub_part. The relation needs to be n:m so there is a relation table that holds ( main_id, sub_id ). The 2 primary keys main_part.id and sub_part.id are both serials. Is there a way to do an import with SQL? I can read the CSV into a temporary table and I can do an INSERT INTO main_part ( ... ) SELECT DISTINCT main columns FROM import; as well as an INSERT INTO sub_part ( ... ) SELECT sub columns FROM import; But how would I know what main_id and sub_id to insert into the n:m relation? At first when I do the import the relation is actually 1:n. -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] need help with import
Am 16.02.2012 02:13, schrieb David Johnston: -Original Message- From: pgsql-sql-ow...@postgresql.org [mailto:pgsql-sql-ow...@postgresql.org] On Behalf Of Andreas Sent: Wednesday, February 15, 2012 8:03 PM To: pgsql-sql@postgresql.org Subject: [SQL] need help with import Hi I get CSV files to import. Th structure is like this. main part, sub part Could be like this A, a1 A, a2 A, a3 B, b1 B, b2 The database has a table for main_part and one for sub_part. The relation needs to be n:m so there is a relation table that holds ( main_id, sub_id ). The 2 primary keys main_part.id and sub_part.id are both serials. Is there a way to do an import with SQL? I can read the CSV into a temporary table and I can do a INSERT INTO main_part ( ... ) SELECT DISTINCT main columns FROM import; as well as a INSERT INTO sub_part ( ... ) SELECT sub columns FROM import; But how would I know what main_id and sub_id to insert into the n:m relation? At first when I do the import the relation is actually 1:n. You will need to use the temporary table and perform multiple insert+select. I do not understand where you are confused. It would help to provide more meaningful sample data and/or the final result you are trying to achieve. Keep in mind any n:m setup requires three tables with the joining table usually having some descriptive meaning. Is time one of your components that you are not showing us? As you say there are 3 tables main_part ( id serial primary key, ... ) sub_part ( id serial primary key, ... ) main_to_sub ( main_id, sub_id ) I would read the csv into a temporary table "import" and insert the main columns into main_part (). Then there are new tuples in main_part() 42, A 43, B Now I insert the sub columns into sub_part() I'll get e.g. 1000, a1 1001, a2 1002, a3 1003, b1 1004, b2 To fill up main_to_sub ( ... ) I needed to know which sub_id belongs to which main_id. ( 42, 1000 ) ( 42, 1001 ) ( 42, 1002 ) ( 43, 1003 ) ( 43, 1004 ) I could compare every main-column in "import" to every related data-column in main_part to get the newly created main_id and do the same with every sub-data-column but this seems to be a wee bit tedious. Is there a more elegant way hat I don't see, yet? -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
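A sketch (not from the thread) of doing the whole import in one statement with writable CTEs (9.1 or later), so each generated id only has to be matched back on a single column. The staging table is assumed to be import ( main_col, sub_col ) and the sub values are assumed to be unique within the file:

WITH new_main AS (
    INSERT INTO main_part ( main_col )
    SELECT DISTINCT main_col FROM import
    RETURNING id, main_col
),
new_sub AS (
    INSERT INTO sub_part ( sub_col )
    SELECT sub_col FROM import
    RETURNING id, sub_col
)
INSERT INTO main_to_sub ( main_id, sub_id )
SELECT m.id, s.id
FROM   import i
JOIN   new_main m ON m.main_col = i.main_col
JOIN   new_sub  s ON s.sub_col  = i.sub_col;

The RETURNING lists give back the freshly generated serials next to the imported values, so the join back to the staging rows only compares one column per side instead of every data column.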
[SQL] How to split up phone numbers?
Hi, is there a way to split up phone numbers? I know that's a tricky topic and it depends on the national phone number format. I'm especially interested in a solution for Germany, Switzerland and Austria. I've got everything in a phone number column that hardly makes sense, like: +49432156780 0049 4321 5678 0 04321/5678-0 and so on... Those 3 samples are actually the same number in different notations. The aim would be to get a normalized number split up in 4 separate columns nr_nation nr_city nr_main nr_individual so I end up with 49 4321 5678 0 for central 49 4321 5678 42 for Mr. Smith Is this doable? It would be a start to at least split off nr_nation and nr_city. -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
[SQL] syntax of joins
hi, is there a disadvantage to write a join as select * from a, b where a.id = b.a_id; over select * from a join b on a.id = b.a_id; -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
[SQL] How to group by similarity?
Hi, I'm trying to get an idea about pg_trgm. I created a GIST index on a text column in a table. Now I can filter the table with similarity(). How would I group the table so that it shows groups that have similarity () > x ? Let's say the table looks like this: id, txt 1, aa1 2, bb1 3, cc1 4, bb2 5, bb3 6, aa2 ... What would a select look like that shows: id, txt, group_id 1, aa1, 1, 6, aa2, 1, 2, bb1, 2, 4, bb2, 2, 5, bb3, 2, 3, cc1, 3 An extension of this problem would be to find similar records in 2 tables. The result should be a list of every record from table2 that is similar to a record of table1. Something like: table1.id, table2.id 1, 3 1, 5 1, 7 2, 2 2,11 -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
[SQL] generic crosstab ?
Hi, is there a generic solution to dump the result of a query as a crosstab, when I can't know how many columns I will need? E.g. I get something like this: id, x 1, a 1, b 1, c 2, l 2, m and I'd like to see it as: id, x1, x2, x3, . xn 1, a, b, c,null, null 2, l,m, I fear the problem is I don't know n. -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] generic crosstab ?
Am 24.04.2012 22:08, schrieb Samuel Gendler: On Tue, Apr 24, 2012 at 1:01 PM, Andreas wrote: Hi, is there a generic solution to dump the result of a query as a crosstab, when I can't know how many columns I will need? E.g. I get something like this: id, x 1, a 1, b 1, c 2, l 2, m Yes. You can provide a query which returns the columns to the version of the crosstab function which looks like this: |crosstab(text source_sql, text category_sql)| It does exactly what you are looking for. The second query returns the set of values that act as columns in the final result (the pivot for each row in the result returned by the first query). This allows the function to correctly insert a null for any column for which there is no row in the first query results. I got stuck with an error that translates to "Materialisation mode is needed but is not allowed in this context." I couldn't figure out what this materialisation mode is, yet. Could you please have a look at my query sample? Both queries work for themselves but crosstab() fails. :( I checked and there are never more than 20 child_ids per parent_id so there should be enough columns. select crosstab ( $$ select parent_id as row_name, 'x' || row_number() over ( partition by parent_id order by child_id ) as category, child_id as value from children order by 1 $$, $$ select 'x' || generate_series(1, 20) as cat order by 1 $$ );
Re: [SQL] generic crosstab ?
Am 25.04.2012 00:04, schrieb Joe Conway: On 04/24/2012 02:42 PM, David Johnston wrote: You must specify the output record structure: SELECT crosstab(text, text) AS ( col1_name col1_type [, colN_name colN_type]* ) Whether this relates to the “materialization node” message you are receiving I have no idea. The error is because you are selecting from a set returning function in the target list rather than the from clause. It should be more like: SELECT * FROM crosstab(text, text) AS ( col1_name col1_type [, colN_name colN_type]* ) OK now I get at least some result. But 1) I need to know how many categories will appear to construct the target list. This is a wee bit of a problem as this number is actually dynamic. 2) There are some rows in the resulting list with empty columns within the row. When I execute the first query for a parent ID that has gaps in the crosstab I see it shows no gaps in the categories when called outside crosstab(). E.g. it dumps x1, x2, x3, x4, x5 when called separately but crosstab() shows x1, x2, null, null, x5, null, x6, x7 How does this make sense ? Thanks for the answers so far :) select * from crosstab ( $$ select parent_id as row_name, 'x' || row_number() over ( partition by parent_id order by child_id ) as category, child_id as value from children order by 1 $$, $$ select 'x' || generate_series(1, 15) as category order by 1 $$ ) as result ( row_name integer, x1 integer, x2 integer, x3 integer, x4 integer, x5 integer, x6 integer, x7 integer, x8 integer, x9 integer, x10 integer, x11 integer, x12 integer, x13 integer, x14 integer, x15 integer ) -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
[SQL] Need help in grouping records
Hi, I'm trying to fight against double entries in tables. I got as far as I can find similar records with trigram string matching. If I do this with a table compared to itself I get something like this: id_a, id_b 3, 5 3, 7 5, 3 5, 7 7, 3 7, 5 11, 13 13, 11 so the records with the IDs 3, 5, 7 and 11, 13 are similar enough to form a group. How would I get a list of record-IDs with a group-ID like this record_id, group_id 3, 1 5, 1 7, 1 11, 2 13, 2 Is there a way to get this by SQL ? -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] Need help in grouping records
Am 20.05.2012 05:04, schrieb Jasen Betts: On 2012-05-19, Andreas wrote: Hi, I'm trying to fight against double entries in tables. I got as far as I can find similar records with trigram string matching. If I do this with a table compared to itself I get something like this: id_a, id_b 3, 5 3, 7 5, 3 5, 7 7, 3 7, 5 11, 13 13, 11 so the records with the IDs 3, 5, 7 and 11, 13 are similar enough to form a group. How would I get a list of record-IDs with a group-ID like this record_id, group_id 3, 1 5, 1 7, 1 11, 2 13, 2 Is there a way to get this by SQL ? select id_a,min(least(id_a,id_b)) from SOMETHING group by id_a close enough? or this: ? select id_a, rank() over ( order by g ) from ( select id_a,min(least(id_a,id_b)) as g from SOMETHING group by id_a ) as foo Thanks :) That's by far more elegant than my approach with arrays that I figured out in the meantime. I changed rank() to dense_rank() in your solution. Functionally the 1st line already does all the magic, though. Great :) -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
[SQL] Select every first/last record of a partition?
Hi, suppose a table that has records with some ID and a timestamp. id,ts 3,2012/01/03 5,2012/01/05 7,2012/01/07 3,2012/02/03 3,2012/01/05 5,2012/03/01 7,2012/04/04 to fetch every last row of those IDs I do: select id, ts from ( select id, ts, row_number() over ( partition by id order by ts desc ) as nr from mytab ) as x where nr = 1 Is there a another way without a subselect? There might be more columns so the window-functions first/last won't help. -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
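A sketch of the DISTINCT ON alternative, which drops the subselect and still carries along any additional columns of the row:

SELECT DISTINCT ON ( id ) id, ts      -- plus any other columns of the same row
FROM   mytab
ORDER  BY id, ts DESC;                -- per id, the first row in this order wins, i.e. the latest ts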
[SQL] Is there a similarity-function that minds national charsets?
Hi, Is there a similarity-function that minds national charsets? Over here we've got some special cases that screw up the results on similarity(). Our characters: ä, ö, ü, ß could as well be written as: ae, oe, ue, ss e.g. select similarity ( 'Müller', 'Mueller' ) results in: 0.363636 In normal cases everything below 0.5 would be too far apart to be considered a match. As it is, I had to transfer the contents of the table into a temporary table where I translate every ambiguous char to its 2-char representation. Is there a solution so that this detour is not necessary? -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
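A sketch of folding the special characters inline instead of via a temporary table; the function name fold_de is made up, and both arguments of similarity() get the same treatment:

CREATE OR REPLACE FUNCTION fold_de ( t text ) RETURNS text AS $$
    SELECT replace( replace( replace( replace( lower($1),
           'ä', 'ae' ), 'ö', 'oe' ), 'ü', 'ue' ), 'ß', 'ss' );
$$ LANGUAGE sql IMMUTABLE;

SELECT similarity( fold_de('Müller'), fold_de('Mueller') );   -- now 1.0 instead of 0.36

Marking the function IMMUTABLE also allows it inside an expression index for trigram searches.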
[SQL] How to limit access only to certain records?
Hi, is there a way to limit access for some users only to certain records? e.g. there is a customer table and there are account-managers. Could I limit account-manager #1 so that he can only access customers according to a flag? Say I create a relation cu_am ( customer_id, account_manager_id ). Could I let the database control that account-manager #1 can only see customers who are assigned to him in the cu_am-relation? For now I do this in the front-end but this is easily circumvented by anyone who has a clue and uses some other client like psql. Regards Andreas -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
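A sketch of the classic view-based approach on releases without row-level security; the account_managers table with a login_role column holding the database role name, the customer_id column name and the managers group role are all assumptions:

CREATE VIEW my_customers AS
SELECT c.*
FROM   customers c
JOIN   cu_am x             ON x.customer_id = c.customer_id
JOIN   account_managers am ON am.account_manager_id = x.account_manager_id
WHERE  am.login_role = current_user;     -- only the rows assigned to the logged-in role

REVOKE ALL    ON customers    FROM managers;   -- no direct table access for the managers group
GRANT  SELECT ON my_customers TO   managers;

From 9.2 on the view can additionally be created WITH (security_barrier) so that functions in user-supplied WHERE clauses cannot leak filtered-out rows.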
[SQL] How to solve the old bool attributes vs pivoting issue?
Hi I do keep a table of objects ... let's say companies. I need to collect flags that express yes / no / don't know. TRUE / FALSE / NULL would do. Solution 1: I have a boolean column for every flag within the companies-table. Whenever I need an additional flag I'll add another column. This is simple to implement. On the other hand I'll have lots of attributes that are NULL. Solution 2: I create a table that holds the flag's names and another one that has 2 foreign keys ... let's call it "company_flags". company_flags references a company and an id in the flags table. This is a wee bit more effort to implement but I gain the flexibility to add any number of flags without having to change the table layout. There are drawbacks: 1) 2 integers as keys would probably need more space than a boolean column. On the other hand lots of boolean-NULL-columns would waste space, too. 2) Probably I'll need a report of companies with all their flags. How would I build a view for this that shows all flags for any company? When I create this view I would not know how many flags exist at execution time. This must be a common issue. Is there a common solution, too? -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
[SQL] Prevent double entries ... no simple unique index
Hi, I've got a log-table that records events regarding other objects. Those events have a state that shows the progress of further work on this event. They can be open, accepted or rejected. I don't want to be able to insert additional events regarding an object X as long as there is an open or accepted event. On the other hand as soon as the current event gets rejected a new event should be possible. So there may be several rejected events at any time but no more than 1 open or accepted entry. Can I do this within the DB so I don't have to trust the client app? The layout looks like this Table : objects ( id serial, ) Table : event_log ( id serial, object_id integer references objects.id, state integer, date_created timestamp, ... ) where state is 0 = open, -1 = reject, 1 = accept I can't simply move rejected events into an archive table and keep a unique index on object_id as there are other descriptive tables that reference the event_log.id. -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] Prevent double entries ... no simple unique index
Am 12.07.2012 07:14, schrieb Andreas Kretschmer: Marc Mamin wrote: A partial index would do the same, but requires less space: create unique index on log(state) WHERE state IN (0,1); OK, nice :) What if I have those states in a 3rd table? So I can see a state-history of when a state got set by whom. objects ( id serial PK, ... ) events ( id serial PK, object_id integer FK on objects.id, ... ) event_states ( id serial PK, event_id integer FK on events.id, state integer ) There still should only be one event per object that has state 0 or 1. Though here I don't have the object-id within the event_states-table. Is it still possible to have a unique index that needs to span over a join of events and event_states? -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
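For the layout of the original question (one open or accepted event per object, any number of rejected ones) the partial-index idea would look roughly like this sketch:

CREATE UNIQUE INDEX one_active_event_per_object
    ON event_log ( object_id )
    WHERE state IN ( 0, 1 );      -- 0 = open, 1 = accept; rejected rows (-1) are not indexed

For the variant with a separate event_states table there is no index that can see across the join; the usual workarounds are a trigger or keeping the current state redundantly on the events table and constraining that column.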
[SQL] join against a function-result fails
Hi, I have a table with user ids and names. Another table describes some rights of those users and still another one describes who inherits rights from whom. A function all_rights ( user_id ) calculates all rights of a user recursively and gives back a table with all userright_ids this user directly has or inherits from other users as ( user_id, userright_id ). Now I'd like to find all users who have the right 42. select user_id, user_name from users join all_rights ( user_id ) using ( user_id ) where userright_id = 42; won't work because the parameter user_id for the function all_rights() is unknown when the function gets called. Is there a way to do this? -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
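Two sketches. A correlated subquery is one way, since the outer user_id may be referenced inside the sub-select; from 9.3 on LATERAL spells the same thing as a join:

SELECT u.user_id, u.user_name
FROM   users u
WHERE  42 IN ( SELECT r.userright_id FROM all_rights( u.user_id ) r );

-- PostgreSQL 9.3 or later:
SELECT u.user_id, u.user_name
FROM   users u
JOIN   LATERAL all_rights( u.user_id ) r ON r.userright_id = 42;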
[SQL] Need help with a special JOIN
Hi, assume I've got 2 tables objects ( id int, name text ) attributes ( object_id int, value int ) attributes has a default entry with object_id = 0 and some others where another value should be used. e.g. objects ( 1, 'A' ), ( 2, 'B' ), ( 3, 'C' ) attributes ( 0, 42 ), ( 2, 99 ) The result of the join should look like this: object_id, name, value 1, 'A', 42 2, 'B', 99 3, 'C', 42 I could figure something out with 2 JOINs, UNION and some DISTINCT ON but this would make my real query rather chunky. :( Is there an elegant way to get this? -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
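A sketch with two left joins and COALESCE; it assumes there is exactly one default row (object_id = 0) in attributes:

SELECT o.id, o.name,
       COALESCE( a.value, d.value ) AS value   -- specific value if present, else the default
FROM   objects o
LEFT JOIN attributes a ON a.object_id = o.id
LEFT JOIN attributes d ON d.object_id = 0
ORDER  BY o.id;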
[SQL] rolling statistic probaply a window function?
I have a rather big log table that collects events for objects and the change of their states. Say an object can have state 0-9. Every now and then an event happens that gets logged as: ( event TIMESTAMP, object_id INTEGER, state_id INTEGER ) Now I need to get a history of the last 12 weeks that shows how many objects of state 0-9 existed on Fridays at 12:00 o'clock. I guess it's a wee bit tricky as the Friday figure needs to reflect only the last change before this date. Probably the last state change happened 20 weeks ago or there were 5 events within this week. In the latter case the object probably switched states from 0 --> 5. It should only count as 1x state 5 within this week. Week 12 counts every object with its last state up to this week. Week 11 should show the change from week 12 to 11. Week 10 should show the change from week 11 to 10 and so on. E.g. in the 1st result for 12 weeks ago there were 10 x state 0 20 x state 1 30 x state 2 11 weeks ago there were 5 new objects with state 0 and there were 7 new state 1 while 3 objects changed from state 1 to 2 10 + 5 = 15 x state 0 20 + 7 - 3 = 24 x state 1 30 + 3 = 33 x state 2 All this for the last 12 Fridays. Is there a way to do this? -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
[SQL] need some magic with generate_series()
Hi I need a series of month numbers like 201212, 201301 (YYYYMM) to join other sources against it. I've got a table that describes projects: projects ( id INT, project TEXT, startdate DATE ) and some others that log events events( project_id INT, createdate DATE, ...) to show some statistics I have to count events and present it as a view with the project name and the month as YYYYMM starting with the startdate of the projects. My problem is that there probably aren't any events in a month but I still need this line in the output. So somehow I need to have a select that generates: project 7,201211 project 7,201212 project 7,201301 It'd be utterly cool to get this for every project in the projects table with one select. Is there hope? -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] need some magic with generate_series()
Thanks Filip, with your help I came a step further. :) Could I do the folowing without using a function? CREATE OR REPLACE FUNCTION month_series ( date ) RETURNS table ( monthnr integer ) AS $BODY$ select to_char ( m, 'MM' )::integer fromgenerate_series ( $1, current_date, '1 month'::interval ) as m $BODY$ LANGUAGE sql STABLE; select project_id, month_series ( createdate ) fromprojects order by 1, 2; Am 22.01.2013 22:52, schrieb Filip Rembiałkowski: or even select m from generate_series( '20121101'::date, '20130101'::date, '1 month'::interval) m; On Tue, Jan 22, 2013 at 3:49 PM, jan zimmek wrote: hi andreas, this might give you an idea how to generate series of dates (or other datatypes): select g, (current_date + (g||' month')::interval)::date from generate_series(1,12) g; regards jan Am 22.01.2013 um 22:41 schrieb Andreas : Hi I need a series of month numbers like 201212, 201301 MM to join other sources against it. I've got a table that describes projects: projects ( id INT, project TEXT, startdate DATE ) and some others that log events events( project_id INT, createdate DATE, ...) to show some statistics I have to count events and present it as a view with the project name and the month as MM starting with startdate of the projects. My problem is that there probaply arent any events in a month but I still need this line in the output. So somehow I need to have a select that generates: project 7,201211 project 7,201212 project 7,201301 It'd be utterly cool to get this for every project in the projects table with one select. Is there hope? -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] need some magic with generate_series()
The query should work for all projects in the projects table where everyone has a seperate startdate for the series. For the join I need ( project_id, month_nr ). When I tried I couldn't figure out how to feed the startdate into Filip's expression without using the function to encapsulate the generate_series(). The folowing doesn't work: select project_id, (select to_char ( m, 'MM' )::integer fromgenerate_series ( projects.createdate, current_date, '1 month'::interval ) as m ) fromprojects order by 1, 2; Am 23.01.2013 01:08, schrieb Alexander Gataric: I would create a common table expression with the series from Filip and left join to the table you need to report on. - Reply message - From: "Andreas" To: "Filip Rembiałkowski" Cc: "jan zimmek" , Subject: [SQL] need some magic with generate_series() Date: Tue, Jan 22, 2013 4:49 pm Thanks Filip, with your help I came a step further. :) Could I do the folowing without using a function? CREATE OR REPLACE FUNCTION month_series ( date ) RETURNS table ( monthnr integer ) AS $BODY$ select to_char ( m, 'MM' )::integer fromgenerate_series ( $1, current_date, '1 month'::interval ) as m $BODY$ LANGUAGE sql STABLE; select project_id, month_series ( createdate ) fromprojects order by 1, 2; Am 22.01.2013 22:52, schrieb Filip Rembiałkowski: > or even > > select m from generate_series( '20121101'::date, '20130101'::date, '1 > month'::interval) m; > > > > On Tue, Jan 22, 2013 at 3:49 PM, jan zimmek wrote: >> hi andreas, >> >> this might give you an idea how to generate series of dates (or other datatypes): >> >> select g, (current_date + (g||' month')::interval)::date from generate_series(1,12) g; >> >> regards >> jan >> >> Am 22.01.2013 um 22:41 schrieb Andreas : >> >>> Hi >>> I need a series of month numbers like 201212, 201301 MM to join other sources against it. >>> >>> I've got a table that describes projects: >>> projects ( id INT, project TEXT, startdate DATE ) >>> >>> and some others that log events >>> events( project_id INT, createdate DATE, ...) >>> >>> to show some statistics I have to count events and present it as a view with the project name and the month as MM starting with startdate of the projects. >>> >>> My problem is that there probaply arent any events in a month but I still need this line in the output. >>> So somehow I need to have a select that generates: >>> >>> project 7,201211 >>> project 7,201212 >>> project 7,201301 >>> >>> It'd be utterly cool to get this for every project in the projects table with one select. >>> >>> Is there hope? >>> >>> >>> -- >>> Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) >>> To make changes to your subscription: >>> http://www.postgresql.org/mailpref/pgsql-sql >> >> >> -- >> Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) >> To make changes to your subscription: >> http://www.postgresql.org/mailpref/pgsql-sql -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] Re: [SQL] need some magic with generate_series()
I'm sorry to prove that daft. :( generate_series needs the startdate of every project to generate the specific list of monthnumbers for every project. To join against this the list needs to have a column with the project_id. So I get something like this but still I cant reference the columns of the projects within the query that generates the series. with projectstart ( project_id, startdate ) as ( select project_id, startdate fromprojects ) select project_id, m fromprojectstartas p left join ( select p.project_id, to_char ( m, 'MM' )::integer fromgenerate_series ( p.startdate, current_date, '1 month'::interval ) as m ) as x using ( project_id ); Am 23.01.2013 01:08, schrieb Alexander Gataric: I would create a common table expression with the series from Filip and left join to the table you need to report on. Sent from my smartphone - Reply message - From: "Andreas" To: "Filip Rembiałkowski" Cc: "jan zimmek" , Subject: [SQL] need some magic with generate_series() Date: Tue, Jan 22, 2013 4:49 pm Thanks Filip, with your help I came a step further. :) Could I do the folowing without using a function? CREATE OR REPLACE FUNCTION month_series ( date ) RETURNS table ( monthnr integer ) AS $BODY$ select to_char ( m, 'MM' )::integer fromgenerate_series ( $1, current_date, '1 month'::interval ) as m $BODY$ LANGUAGE sql STABLE; select project_id, month_series ( createdate ) fromprojects order by 1, 2; Am 22.01.2013 22:52, schrieb Filip Rembiałkowski: > or even > > select m from generate_series( '20121101'::date, '20130101'::date, '1 > month'::interval) m; > > > > On Tue, Jan 22, 2013 at 3:49 PM, jan zimmek wrote: >> hi andreas, >> >> this might give you an idea how to generate series of dates (or other datatypes): >> >> select g, (current_date + (g||' month')::interval)::date from generate_series(1,12) g; >> >> regards >> jan >> >> Am 22.01.2013 um 22:41 schrieb Andreas : >> >>> Hi >>> I need a series of month numbers like 201212, 201301 MM to join other sources against it. >>> >>> I've got a table that describes projects: >>> projects ( id INT, project TEXT, startdate DATE ) >>> >>> and some others that log events >>> events( project_id INT, createdate DATE, ...) >>> >>> to show some statistics I have to count events and present it as a view with the project name and the month as MM starting with startdate of the projects. >>> >>> My problem is that there probaply arent any events in a month but I still need this line in the output. >>> So somehow I need to have a select that generates: >>> >>> project 7,201211 >>> project 7,201212 >>> project 7,201301 >>> >>> It'd be utterly cool to get this for every project in the projects table with one select. >>> >>> Is there hope? >>> >>> >>> -- >>> Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) >>> To make changes to your subscription: >>> http://www.postgresql.org/mailpref/pgsql-sql >> >> >> -- >> Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) >> To make changes to your subscription: >> http://www.postgresql.org/mailpref/pgsql-sql -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
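A sketch of what the thread was circling around. Before 9.3 the set-returning call has to sit in the select list (as in the function attempt above); from 9.3 on LATERAL lets generate_series() see the project's own startdate directly. YYYYMM is assumed as the intended format for the month numbers:

-- PostgreSQL 9.3 or later
SELECT p.id AS project_id,
       to_char( m, 'YYYYMM' )::integer AS month_nr
FROM   projects p
JOIN   LATERAL generate_series( p.startdate, current_date, interval '1 month' ) AS m ON true
ORDER  BY 1, 2;

From 9.3 on the same thing can also be written without the keyword, as FROM projects p, generate_series( p.startdate, current_date, interval '1 month' ) m, because functions in FROM get an implicit LATERAL.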
[SQL] How to access multicolumn function results?
Hi I've got functions that return a TABLE. If I call it with constant parameters like: SELECT * FROM some_fct( 42 ); I get a table with column names as the result as intended. When I rather call it with the parameter coming from another table I get a set of records where I don't know how to access specific columns: SELECT some_fct( some_id ) FROM some_other_table; Now I get e.g. the result ( id1, value11, value12, value13 ), ( id1, value14, value15, value16 ), ( id2, value24, value25, value26 ), ... How can I split this up to look like a normal table or view with the column names that are defined in the RETURNS TABLE ( ... ) expression of the function. -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
[SQL] How to reject overlapping timespans?
Hi, I need to store data that has a valid timespan with start and enddate. objects ( id, name, ... ) object_data ( object_id references objects(id), startdate, enddate, ... ) nothing special, yet How can I have PG reject a data record where the new start- or enddate lies between the start- or enddate of another record regarding the same object_id? Also it isn't allowed that such timespans lie within another. There can be times where no valid data exists but there can't be more than one valid data-record for the same point in time. -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] How to reject overlapping timespans?
Am 17.02.2013 19:20, schrieb Andreas Kretschmer: Andreas hat am 17. Februar 2013 um 18:02 geschrieben: I need to store data that has a valid timespan with start and enddate. objects ( id, name, ... ) object_data ( object_id referencs objects(id), startdate, enddate, ... ) nothing special, yet How can I have PG reject a data record where the new start- or enddate lies between the start- or enddate of another record regarding the same object_id? With 9.2 you can use DATERANGE and exclusion constraints test=# create table maps(id int, duration daterange, exclude using gist(id with =, duration with &&)); NOTICE: CREATE TABLE / EXCLUDE will create implicit index "maps_id_duration_excl" for table "maps" CREATE TABLE test=*# insert into maps values (1,'(2013-01-01,2013-01-10]'); INSERT 0 1 test=*# insert into maps values (1,'(2013-01-05,2013-01-15]'); ERROR: conflicting key value violates exclusion constraint "maps_id_duration_excl" DETAIL: Key (id, duration)=(1, [2013-01-06,2013-01-16)) conflicts with existing key (id, duration)=(1, [2013-01-02,2013-01-11)). test=*# though I still have a 9.1.x as productive server so I'm afraid I have to find another way. Thanks, Andreas :) -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
[SQL] I need to fill up a sparse table in an view
Hi, I need to fill up a sparse table in a view. The table holds some numbers relating months and objects. The month is an integer in the format YYYYMM. To make it more convenient to manage this table I decided to let a value be good till the next entry. E.g. if there is an entry in January and March, the January entry is good in February, too. So the table looks like this: my_numbers ( object_id int, month int, some_nr int ) ( 17, 201301, 123 ), ( 42, 201301, 456 ), ( 42, 201303, 789 ), Now I need a view that fills the gaps up till the current month. ( 17, 201301, 123 ), ( 17, 201302, 123 ), <-- filled gap ( 17, 201303, 123 ), <-- filled gap ( 42, 201301, 456 ), ( 42, 201302, 456 ), <-- filled gap ( 42, 201303, 789 ), Is this possible? -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
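A sketch, not from a reply, that builds a month series per object and picks, for each month, the newest entry at or before it; the start of the series is hard-coded to January 2013 as in the example:

SELECT o.object_id,
       to_char( m, 'YYYYMM' )::integer AS month,
       ( SELECT n.some_nr
         FROM   my_numbers n
         WHERE  n.object_id = o.object_id
           AND  n.month <= to_char( m, 'YYYYMM' )::integer
         ORDER  BY n.month DESC
         LIMIT  1 ) AS some_nr
FROM  ( SELECT DISTINCT object_id FROM my_numbers ) o,
       generate_series( '2013-01-01'::date,
                        date_trunc( 'month', current_date ),
                        interval '1 month' ) AS m
ORDER  BY o.object_id, 2;
-- months before an object's first entry come out with a NULL some_nr
-- and can be filtered away in an outer query or in the view definition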
[SQL] How to split an array-column?
Hi, I've got a table to import from csv that has an array-column like: import ( id, array_col, ... ) Those arrays look like ( 42, ";4941;4931;4932", ... ) They can have 0 or any number of elements separated by ; So I'd need a result like this: 42, 4941 42, 4931 42, 4932 How would I get this? -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] How to split an array-column?
Thanks for the pointer. It got me half way. This is the solution: select distinct id, unnest ( string_to_array ( trim ( array_column, ';' ), ';' ) ) from import; Am 18.03.2013 20:24, schrieb Venky Kandaswamy: You can try select id, unnest(array_col) from table Venky Kandaswamy Principal Engineer, Adchemy Inc. 925-200-7124 From: pgsql-sql-ow...@postgresql.org [pgsql-sql-ow...@postgresql.org] on behalf of Andreas [maps...@gmx.net] Sent: Monday, March 18, 2013 12:13 PM To: pgsql-sql@postgresql.org Subject: [SQL] How to split an array-column? Hi, I've got a table to import from csv that has an array-column like: import ( id, array_col, ... ) Those arrays look like ( 42, ";4941;4931;4932", ... ) They can have 0 or any number of elements separated by ; So I'd need a result like this: 42, 4941 42, 4931 42, 4932 How would I get this? -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
[SQL] monthly statistics
Hi, I need to show a moving statistic of states of objects for every month since the beginning of 2013. There are tables like objects ( id integer, name text ); state ( id integer, state text ); 10=A, 20=B ... 60=F history ( object_id integer, state_id integer, ts timestamp ); Every event that changes the state of an object is recorded in the history table. I need to count the numbers of As, Bs, ... at the end of each month. The subquery x finds the last state before a given date, here February 1st. select s.state, count(*) from ( select distinct on ( object_id ) state_id from history where ts < '2013/02/01' order by object_id, ts desc ) as x join state as s on x.state_id = s.id group by s.state order by s.state; Now I need this for a series of months. This would give me the relevant dates: select generate_series ( '2013/02/01'::date, current_date + interval '1 month', interval '1 month' ) How could I combine those 2 queries so that the date in query 1 would be replaced dynamically with the result of the series? To make it utterly perfect the final query should show a crosstab with the states as columns. It is possible that in some months not every state exists so in this case the crosstab-cell should show a 0. Month A B C ... 2013/02/01 2013/03/01 ... -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
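A sketch of the combination: join the history against the generated month starts and take, per month and object, the last event before that date. The pivot with a 0 for missing states would then be a job for crosstab() from tablefunc or for conditional aggregation on top of this result.

SELECT month_start::date,
       s.state,
       count(*) AS objects
FROM (
    SELECT DISTINCT ON ( d.month_start, h.object_id )
           d.month_start, h.object_id, h.state_id
    FROM   generate_series( '2013-02-01'::date,
                            current_date + interval '1 month',
                            interval '1 month' ) AS d( month_start )
    JOIN   history h ON h.ts < d.month_start
    ORDER  BY d.month_start, h.object_id, h.ts DESC
) AS x
JOIN  state s ON s.id = x.state_id
GROUP BY month_start::date, s.state
ORDER BY 1, 2;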
[SQL] Funny date-sorting task
Hi, I've got a stack of tasks to show in a list. Every task has a timestamp X that may be NULL or a date. It contains the date when this task should be done. Sometimes it has a date and a time part, too. The list should be like this: 1) X sometime today should come first, in ascending time order. 2) X in the past should show up after (1) in descending order so that the more recent dates come first 3) X = NULL 4) X sometime in the future The point is, I'd like to do the scheduled tasks for today as planned. = (1) Those already lost appointments should not delay today's tasks that are still in time, but I'd like to get them after the today-tasks in an order where there is a chance that a nearer lost appointment might still be rescued even though it's a bit late. The dates longer back might be lost for good anyway so they can wait a bit longer. = (2) Provided I get through (1) and (2) I'd venture the unknown where there wasn't a date until now. = (3) Well, and future dates will be minded when their time is there. = (4) For now I do this by having a sorting-column in the tasks-table that gets updated in 4 steps where my application has to select every group (1) - (4), then sequentially walk through the recordset and update the sort-order-column by a counter. Later I sort ascending by the sort-order-column. It kind of works but I consider it ugly. Could you provide a clever solution? ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
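A sketch that does the whole ordering in one ORDER BY instead of the maintained sort column; the names tasks and x are taken from the description:

SELECT *
FROM   tasks
ORDER  BY
    CASE
        WHEN x >= current_date
         AND x <  current_date + 1 THEN 1      -- (1) due today
        WHEN x <  current_date     THEN 2      -- (2) overdue
        WHEN x IS NULL             THEN 3      -- (3) no date yet
        ELSE                            4      -- (4) future
    END,
    CASE WHEN x < current_date THEN x END DESC,   -- overdue tasks: most recent first
    x;                                            -- everything else: ascending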
[SQL] percentages of a column
Hi, There is a Select that calculates some SUMs of objects. I'd like to show a list of counts and percentages of these counts based on the sum of all counts. Is that possible in a SELECT statement? Example: Fruit Count % -- Bananas 5 10% Apples 15 30% Oranges 30 60% ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
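On 8.4 or later a window function over the aggregate does this in one pass; the table and column names here are made up:

SELECT fruit,
       count(*) AS cnt,
       round( 100.0 * count(*) / sum( count(*) ) OVER (), 1 ) AS pct
FROM   fruit_orders
GROUP  BY fruit
ORDER  BY cnt;

Before 8.4, a scalar subselect such as ( SELECT count(*) FROM fruit_orders ) in the select list can serve as the divisor instead.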
[SQL] List of FKeys ?
Hi, could I get a list of foreign keys that refer to a column? Say I have a table_1 (t1_id integer ..) and a lot of other tables that may refer to table 1. Is there a command that lists all tables that have a foreign key that points to t1_id? Preferably with the "on update/delete" options of the relation. Regards Andreas ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
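A sketch against the system catalogs; it lists every foreign key that points at table_1 together with its ON UPDATE / ON DELETE actions (narrowing it down to the t1_id column would additionally need a join from confkey to pg_attribute):

SELECT conname,
       conrelid::regclass AS referencing_table,
       confupdtype        AS on_update,   -- a = no action, r = restrict, c = cascade, n = set null, d = set default
       confdeltype        AS on_delete
FROM   pg_constraint
WHERE  contype   = 'f'
  AND  confrelid = 'table_1'::regclass;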
[SQL] design of tables for sparse data
Hi, I need some help to improve my design skills. :) I lately read an article about table design that taught one shouldn't design tables where it's clear that some columns aren't relevant for every row. It didn't go into the dirty usage details beyond the table design. E.g. a really simple example like a school that stores pupils like this: pupil (pupil_id, pupil_name, attends_english, attends_history, attends_maths, attends_football, attends_swimming) 1) Some pupils don't attend football, swimming or both. 2) Occasionally there will be new classes added and others get dropped. Say in a year a column "attends_knitting" gets introduced. Now all those 50,000 existing rows get a column for which the person never even had the occasion to apply. If for some reason the knitting class gets discontinued, every row in the future will still get this column. So it would be better to create 3 normalized tables: pupil (pupil_id, pupil_name, start_date, exit_date) classes (class_id, class_name, is_available, output_order) attends_to (pupil_id, class_id, in_year) as an n:m-relation Fine. Now I got rid of those empty columns in the pupil table. MY QUESTIONS: 1) How would I SELECT a report that looks like the first version of the pupil table out of the 3 table design? There must be a nontrivial SELECT statement that combines all 3 tables. E.g. I want the result: pupil_id, pupil_name, attends_to_english, ., attends_to_football, attends_to_swimming, attends_to_knitting (42, Frank Miller, yes, , no, yes, yes) (43, Suzy Smith, yes, ..., yes, yes, no) ... 2) Could I control the order in which those attends_to-columns appear by a numerical field output_order? 3) Could I restrict the classes list so that only those appear when there are pupils actually attending them in a given time frame? 3) a) Like "competitive knitting" was only available from 2000-2005. Now I'd produce a list for 2007 so there shouldn't appear an empty knitting-column. --> classes.is_available 3) b) Or it is available but no one has chosen it in 2007. --> attends_to.in_year Regards Andreas ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [SQL] design of tables for sparse data
Fernando Hevia schrieb: --- Andreas Wrote: --- ... MY QUESTIONS: Your questions have a strong "home-work" look. Yes, but I didn't want to bother everyone with my project's details. It's more like a CRM. Up until now I just tried to manage somehow with the SQL basics and now I'd like to get better. One interesting thing is crosstabs because I could use them in the reporting module. I used this schoolbook scenario because it's such an easy example. ;) 1) How would I SELECT a report that looks like the first version of the pupil table out of the 3 table design? There must be a nontrivial SELECT statement that combines all 3 tables. You should check out the JOIN clause in select statements. Simple example: Select t1.col1, t2.col1, t2.col2 from t1 inner join t2 on (t1.col1 = t2.col1) A simple JOIN won't do the trick. That would give me something like: (42, Frank Miller, Maths) (42, Frank Miller, English) (42, Frank Miller, Sports) (43, Suzy Smith, Maths) (43, Suzy Smith, History) But I want it turned around and a bit interpreted like: Column heads = (ID, Name, Maths, English, Sports, History) (42, Frank Miller, yes, yes, yes, no ) (43, Suzy Smith, yes, no, no, yes) Regards Andreas ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
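A sketch of the pivot described above, using conditional aggregation on the 3-table layout from the question; the class names are hard-coded here, a truly dynamic column list needs crosstab() from contrib/tablefunc or has to be assembled by the client:

SELECT p.pupil_id,
       p.pupil_name,
       COALESCE( bool_or( c.class_name = 'Maths' ),   false ) AS attends_maths,
       COALESCE( bool_or( c.class_name = 'English' ), false ) AS attends_english,
       COALESCE( bool_or( c.class_name = 'Sports' ),  false ) AS attends_sports
FROM   pupil p
LEFT JOIN attends_to a ON a.pupil_id = p.pupil_id AND a.in_year = 2007
LEFT JOIN classes    c ON c.class_id = a.class_id
GROUP  BY p.pupil_id, p.pupil_name
ORDER  BY p.pupil_id;

The order of the hand-written columns is fixed at query-writing time; classes.output_order can only steer it if the column list itself is generated dynamically.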