On Thu, Nov 9, 2017 at 9:58 AM, Igal @ Lucee.org <i...@lucee.org> wrote:
> On 11/8/2017 6:25 PM, Igal @ Lucee.org wrote:
>
>> On 11/8/2017 5:27 PM, Allan Kamau wrote:
>>
>>> Maybe using NUMERIC without explicitly stating the precision is
>>> recommended
literal
values can be written without the single quotes.
That didn't work. I cast the value in the SELECT to VARCHAR(16), but all
it did was change the error message to say that it expected `money` but
received `character varying`.
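A minimal sketch of one workaround, assuming the SQL Server export delivers money values as formatted strings such as "$1,234.56" (which a Postgres numeric or money column will not accept directly): normalize the text before the INSERT. The function name and sample values here are hypothetical.

```python
from decimal import Decimal

def normalize_money(raw: str) -> Decimal:
    """Strip a leading currency symbol and thousands separators so the
    value can be inserted into a numeric (or money) column."""
    return Decimal(raw.strip().lstrip("$").replace(",", ""))

print(normalize_money("$1,234.56"))  # 1234.56
```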
On 11/8/2017 4:52 PM, Allan Kamau wrote:
On Nov 9, 2017 03:46, "Tom Lane" wrote:
"Igal @ Lucee.org" writes:
> I have a column named "discount" of type money in SQL Server. I created
> the table in Postgres with the same name and type, since Postgres has a
> type named money, and am transferring the
Hi,
I am executing many "COPY" commands via psql serially from a bash script
from a compute node that accesses a mounted home directory located on a
remote server.
After about a thousand or so executions of the "COPY" command I get the
error "psql: could not get home directory to locate password
Thank you David.
-Allan.
On Mon, Jun 20, 2016 at 11:19 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Sun, Jun 19, 2016 at 5:09 PM, Allan Kamau <kamaual...@gmail.com> wrote:
>
>> I have an xml document from which I would like to extract the content
I have an xml document from which I would like to extract the contents of
several elements.
I would like to use xpath to extract the contents of "name" from the xml
document shown below.
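The stumbling block with documents like this is usually the default namespace on the root element. A sketch of the issue in Python (element names are illustrative); in Postgres the xpath() function needs the analogous {prefix, uri} namespace-array argument:

```python
import xml.etree.ElementTree as ET

doc = """<uniprot xmlns="http://uniprot.org/uniprot">
  <entry><name>EXAMPLE_NAME</name></entry>
</uniprot>"""

root = ET.fromstring(doc)
# Elements live in the default namespace, so a bare ".//name" finds
# nothing; the query must bind the namespace URI to a prefix, just as
# Postgres xpath() does via its namespace-array argument.
ns = {"u": "http://uniprot.org/uniprot"}
names = [el.text for el in root.findall(".//u:name", ns)]
print(names)  # ['EXAMPLE_NAME']
```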
WITH x AS
(
SELECT
'
http://uniprot.org/uniprot" xmlns:xsi="
http://www.w3.org/2001/XMLSchema-instance"
I would like to generate tsvectors on documents that contain chemistry
related text.
Is there a synonym dictionary for chemistry terms available?
-Allan
Provide a link to the source document where you found the link you have
posted.
Allan.
On Thu, May 12, 2016 at 4:48 PM, Daniel Westermann <
daniel.westerm...@dbi-services.com> wrote:
> just to let you know:
>
> This link is broken:
> http://www.postgresql.org/docs/9./static/release-9-6.html
>
>
On Wed, Aug 26, 2015 at 5:23 AM, rob stone floripa...@gmail.com wrote:
On Tue, 2015-08-25 at 20:17 -0400, Melvin Davidson wrote:
I think a lot of people here are missing the point. I was trying to
give examples of natural keys, but a lot of people are taking great
delight
in pointing out
To add to what others have said: I would use a bash script (and awk) to
prepare each record of the raw CSV file with a dataset name, the name of the
file, a timestamp and a serial number, and place the newly generated data into
a new file. In this bash script, the value of the dataset name, name of
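The same record-tagging step can be sketched in Python rather than bash/awk (the dataset/file names here are illustrative):

```python
import csv
import io

def tag_records(raw_lines, dataset, filename, timestamp):
    """Prefix each raw line with dataset name, file name, timestamp and
    a serial number, producing new CSV rows ready for loading."""
    out = io.StringIO()
    writer = csv.writer(out)
    for serial, line in enumerate(raw_lines, start=1):
        writer.writerow([dataset, filename, timestamp, serial, line.rstrip("\n")])
    return out.getvalue()

print(tag_records(["x\ty", "z\tw"], "ds1", "raw.tsv", "2016-01-01T00:00:00Z"))
```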
It seems you are fetching from a table then sequentially inserting each
record to another table.
In PostgreSQL, you could use cursors in PL/pgSQL (
http://www.postgresql.org/docs/9.4/interactive/plpgsql-cursors.html).
Alternatively you may write a single query which selects from the table and
Dear Dmitriy,
To add to David's suggestions: data caching is a difficult task to
undertake. Consider an example where your data may not all fit into memory;
when you cache these data outside PostgreSQL you would need to look into
memory management as well as issues around concurrent population
On Sun, Aug 25, 2013 at 3:15 AM, Korisk kor...@yandex.ru wrote:
Hi!
I want to quickly insert a lot of data (in the form of triplets) into the
db. The data is formed dynamically, so COPY is not suitable.
I tried batch insert like this:
insert into triplets values (1,1,1);
insert into triplets values (1,1,1),
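The multi-row VALUES form the poster is reaching for can be generated programmatically; a sketch that assumes integer-only triplets (so no quoting or escaping is needed):

```python
def batch_insert(table, triplets):
    """Build one multi-row INSERT statement for integer triplets."""
    values = ",".join("(%d,%d,%d)" % t for t in triplets)
    return "INSERT INTO %s VALUES %s;" % (table, values)

print(batch_insert("triplets", [(1, 1, 1), (1, 2, 3)]))
# INSERT INTO triplets VALUES (1,1,1),(1,2,3);
```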
I have a field on which I want to perform a join with some other field
in another table without case sensitivity, perhaps using ~*.
One of these fields may contain + characters or unbalanced parentheses. Is
there a way, or some function, that may mask out the regexp awareness of any
of
Regards,
Allan.
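One way to "mask out the regexp awareness" of such fields, sketched in Python; for Postgres's ~* operator the same metacharacters would need to be backslash-escaped before the match. The sample value is hypothetical.

```python
import re

field = "isolate+A (fragment"   # contains '+' and an unbalanced '('
pattern = re.escape(field)      # every metacharacter loses its meaning
# Case-insensitive containment match, analogous to Postgres ~*
assert re.search(pattern, "SOURCE: ISOLATE+A (FRAGMENT, 2009", re.IGNORECASE)
assert not re.search(pattern, "isolateA fragment")
```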
On 1/29/12, Tomas Vondra t...@fuzzy.cz wrote:
On 29.1.2012 07:30, Allan Kamau wrote:
Hi,
This is definitely off topic, I apologize.
We are planning to move our PostgreSQL installation from a shared
server to a dedicated server. I have been given the responsibility of
writing
Hi,
This is definitely off topic, I apologize.
We are planning to move our PostgreSQL installation from a shared
server to a dedicated server. I have been given the responsibility of
writing a migration policy document for this operation. This would be
my first such document to put together,
I am
Hello,
I have a read-only table with a field holding long varbit data (of
length 6000), and a function that performs various aggregate
bitAND and bitOR operations on this field and other fields of the
table. This function does not explicitly write any data to disk (there
is hardly any
On Thu, Nov 3, 2011 at 6:02 AM, Benjamin Smith li...@benjamindsmith.com wrote:
On Wednesday, November 02, 2011 11:39:25 AM Thomas Strunz wrote:
I have no idea what you do but just the fact that you bought ssds to
improve performance means it's rather high load and hence important.
Important
Hi,
I have a tab delimited file with over a thousand fields (columns)
which I would like to import into postgreSQL.
I have opted to import the entire record (line) of this file into a
single field in a table, one table record per file line. Later split
the contents of the field accordingly into
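The later splitting step might look like this (the column count and the padding policy for short records are assumptions):

```python
def split_record(raw_line: str, ncols: int):
    """Split one stored raw line into its tab-delimited columns,
    padding short records with NULLs (None)."""
    cols = raw_line.rstrip("\n").split("\t")
    return cols + [None] * (ncols - len(cols))

print(split_record("a\tb\tc\n", 5))  # ['a', 'b', 'c', None, None]
```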
On Mon, Oct 24, 2011 at 11:29 PM, Raymond O'Donnell r...@iol.ie wrote:
On 24/10/2011 20:23, Allan Kamau wrote:
Hi,
I have a tab delimited file with over a thousand fields (columns)
which I would like to import into postgreSQL.
I have opted to import the entire record (line) of this file
On Tue, Aug 30, 2011 at 10:39 AM, Scott Marlowe scott.marl...@gmail.com wrote:
On Tue, Aug 30, 2011 at 12:54 AM, Florian Weimer fwei...@bfk.de wrote:
* Scott Marlowe:
On a machine with lots of memory, I've run into pathological behaviour
with both the RHEL 5 and Ubuntu 10.04 kernels where the
On Sat, Aug 6, 2011 at 11:02 AM, Fernando Pianegiani
fernando.pianegi...@gmail.com wrote:
Hello,
do you know any FREE hosting platforms where PostgreSQL, Java SDK, Tomcat
(or other web servers) can be already found installed or where they can be
installed from scratch? If possible, it would
On Tue, Jul 26, 2011 at 4:41 PM, Merlin Moncure mmonc...@gmail.com wrote:
http://codesynthesis.com/~boris/blog/2011/07/26/odb-1-5-0-released/
merlin
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
On Wed, Jun 22, 2011 at 1:35 PM, Durumdara durumd...@gmail.com wrote:
Hi!
I have 3 tables. I want to run a query that collects some data from
them and joins it into one result table.
Here is a little example of how to do this in another DB with a script:
create temp table tmp_a as select id, name,
On Thu, Apr 7, 2011 at 5:50 AM, Craig Ringer
cr...@postnewspapers.com.au wrote:
On 06/04/11 18:51, tejas tank wrote:
What should I do?
Read this:
http://wiki.postgresql.org/wiki/Guide_to_reporting_problems
and try again with a question that includes an example you can
On Mon, Mar 14, 2011 at 10:58 AM, Peter Evens pe...@bandit.be wrote:
hello,
I have a question about the PRIMARY KEY:
how can we make it start from, for example, 1000 instead of 1?
This is our program:
CREATE TABLE hy3_pack
(
hy3_id serial NOT NULL,
hy3_serie_nummer text NOT NULL,
On Sun, Mar 6, 2011 at 1:46 AM, John R Pierce pie...@hogranch.com wrote:
On 03/05/11 2:05 PM, Allan Kamau wrote:
Is it possible, in theory, to efficiently perform counts via the primary or
unique indices' underlying data structures, regardless of whether there is
a WHERE clause detailing filtration base
On Sat, Mar 5, 2011 at 8:02 PM, Raymond O'Donnell r...@iol.ie wrote:
On 03/03/2011 13:29, obamaba...@e1.ru wrote:
I use pgsql 9.0.3 and I know that postgresql tries to use the fields in
indexes instead of the original table when possible.
But when I run
SELECT COUNT(id) FROM tab
or
We are trying to determine the possible side effects of a rogue user account.
A web application requires a dedicated PostgreSQL database in which to
create tables and other database objects and manipulate data within
this single database. So I have created a database and made the
application's
On Sat, Feb 19, 2011 at 5:03 AM, andrew1 andr...@mytrashmail.com wrote:
hi all, I need to load a mysql dump into postgres 8.3. Is there a script to do
that? I hoped to use my2pg.pl, but couldn't find it.
It looks like it's not part of /contrib in debian lenny.
what should I use for 8.3?
On Mon, Feb 14, 2011 at 10:38 AM, Alessandro Candini cand...@meeo.it wrote:
No, this database is on a single machine, but a very powerful one.
Processors with 16 cores each and ssd disks.
I already use partitioning and tablespaces for every instance of my db and I
gain a lot with my split
2011/2/10, Alessandro Candini cand...@meeo.it:
Here you are my probably uncommon situation.
I have installed 4 different instances of postgresql-9.0.2 on the same
machine, on ports 5433, 5434, 5435, 5436.
On these instances I have split a huge database, dividing it per date
(from 1995 to
On Mon, Dec 13, 2010 at 8:49 AM, savio rodriges sj_sa...@yahoo.com wrote:
Hello,
We are facing very HIGH memory utilization on postgreSQL server and need help.
Below are details of PostgreSQL server,
===
MemTotal:
I am searching for a resource that explains how to handle SQL-related
exceptions in PL/pgSQL without letting the function's execution
terminate.
I would like to use this to address possible UNIQUE constraint
violations (and resulting exceptions) attributed to multiple clients
concurrently
Allan Kamau kamaual...@gmail.com:
I am searching for a resource that explains how to handle SQL-related
exceptions in PL/pgSQL without letting the function's execution
terminate.
I would like to use this to address possible UNIQUE constraint
violations (and resulting exceptions) attributed
I would like to use COPY to populate a single row in a table with data
from a TSV file, for further transformations.
I can't seem to find a way to stop COPY from seeing that the TSV file does
indeed contain fields.
This is my current query:
COPY raw_data
(
On Tue, Dec 7, 2010 at 2:14 PM, Raymond O'Donnell r...@iol.ie wrote:
On 07/12/2010 11:07, Allan Kamau wrote:
I would like to use COPY to populate a single row in a table with data
from a TSV file, for further transformations.
I can't seem to find a way to stop COPY from seeing that the TSV file does
2010/12/3 Devrim GÜNDÜZ dev...@gunduz.org:
On Sun, 2010-11-21 at 12:39 +0300, Allan Kamau wrote:
I am unable to obtain (using yum) a version of pgAdmin3 that can
connect fruitfully to postgreSQL 9.x. My installation reports that the
version I do have, 1.10.5, is the latest.
Should be fixed
2010/11/14 Devrim GÜNDÜZ dev...@gunduz.org:
I just released PostgreSQL 9.0 RPM for Red Hat Enterprise Linux 6 and
Fedora 14, on both x86 and x86_64.
Please note that 9.0 packages have a different layout as compared to
previous ones. You may want to read this blog post about this first:
On Thu, Nov 18, 2010 at 6:28 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Allan Kamau kamaual...@gmail.com writes:
CREATE TABLE farm.produce
(id INTEGER NOT NULL DEFAULT NEXTVAL('farm.produce_seq')
,process___id TEXT NOT NULL
,item_names tsvector NULL
,product__ids__tsquery tsquery NULL
I have two tables forming a parent-child relationship, in my case a
child entity could appear to have multiple parent records.
Given records of the parent table, I would like to perform a pairwise
determination of shared children between each such pair.
I am thinking of using full text search
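Outside the database, the pairwise computation itself is a straightforward set intersection; a sketch with hypothetical parent and child ids:

```python
from itertools import combinations

children = {            # hypothetical parent-id -> set of child ids
    "p1": {1, 2, 3},
    "p2": {2, 3, 4},
    "p3": {9},
}
# Shared children for every unordered pair of parents
shared = {
    pair: children[pair[0]] & children[pair[1]]
    for pair in combinations(sorted(children), 2)
}
print(shared[("p1", "p2")])  # {2, 3}
```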
On Wed, Nov 17, 2010 at 8:44 PM, Allan Kamau kamaual...@gmail.com wrote:
I have two tables forming a parent-child relationship, in my case a
child entity could appear to have multiple parent records.
Given records of the parent table, I would like to perform a pairwise
determination of shared
Hi,
I am experiencing the "row is too big" error (with postgreSQL-9.0.1)
when populating a table having tsquery and tsvector fields.
Are fields of the tsquery and tsvector datatypes affected by this row-size
restriction?
Allan.
On Thu, Nov 18, 2010 at 8:35 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Allan Kamau kamaual...@gmail.com writes:
I am experiencing the "row is too big" error (with postgreSQL-9.0.1)
when populating a table having a tsquery and tsvector fields.
Could we see the exact declaration of your table
On Sun, Nov 14, 2010 at 4:12 PM, Leif Biberg Kristensen
l...@solumslekt.org wrote:
On Sunday 14. November 2010 13.44.53 franrtorres77 wrote:
Hi there
I need to add periodically some data from a remote mysql database into our
postgresql database. So, does anyone know how to do it having in
On Thu, Nov 11, 2010 at 4:35 PM, Vick Khera vi...@khera.org wrote:
On Thu, Nov 11, 2010 at 1:42 AM, Allan Kamau kamaual...@gmail.com wrote:
After googling I found little recent content (including survival
statistics) on using SSDs in a write-intensive database environment.
We use the Texas
Hi,
As part of a data-mining activity, I have some plpgsql functions
(executed in parallel, up to 6 such concurrent calls) that perform
some reads and writes of a large number of (maybe 1) records at a
time to a table having a multi-column primary key.
It seems the writing of these few thousands
On Tue, Nov 9, 2010 at 3:50 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Mon, Nov 8, 2010 at 11:24 PM, Sandeep Srinivasa s...@clearsenses.com
wrote:
There was an interesting post today on highscalability
-
Hi,
I am looking for an easy-to-use job queueing system, where a job is a
record in a table and several aggressive worker processes (from
multiple instances of a client application) can each take a single job
at a time. A job may be collected only once.
The job queue will act like a stack
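The "collected only once" requirement is essentially an atomic take. A thread-based sketch of that semantics (in PostgreSQL itself this is typically done with row locks or advisory locks, and in modern versions with SELECT ... FOR UPDATE SKIP LOCKED):

```python
import queue
import threading

jobs = queue.Queue()
for j in range(5):
    jobs.put(j)

claimed = []
claimed_lock = threading.Lock()

def worker():
    while True:
        try:
            job = jobs.get_nowait()   # atomic take: each job handed out once
        except queue.Empty:
            return
        with claimed_lock:
            claimed.append(job)

workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(sorted(claimed))  # [0, 1, 2, 3, 4] -- every job claimed exactly once
```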
I am debugging a plpgsql function which contains a long sql query,
with several parameters, that is executed using the EXECUTE
command. I would like to output this command string, including the
actual values of the parameters contained within it, so I can obtain
the actual query and run it
On Thu, Oct 28, 2010 at 5:47 PM, Leif Biberg Kristensen
l...@solumslekt.org wrote:
On Thursday 28. October 2010 16.25.47 Allan Kamau wrote:
I am debugging a plpgsql function which contains a long sql query
consisting of several parameters which is executed using EXECUTE
command. I would like
On Thu, Oct 7, 2010 at 9:11 PM, Scott Marlowe scott.marl...@gmail.com wrote:
Hardware:
48 core AMD Magny Cours (4x12)
128G 1333MHz memory
34 15k6 drives, 2 hot spares, rest in RAID-1 pairs, 1 set for OS, 4
for pg_xlog, rest for /data/base
LSI RAID controller
OS:
Ubuntu 10.04
uname
On Tue, Sep 28, 2010 at 1:49 PM, Ivan Sergio Borgonovo
m...@webthatworks.it wrote:
I know I'm comparing apples and orange but still the difference in
performance was quite astonishing.
I've 2 tables that look like:
create table products(
id bigint
price double precision, /* legacy, don't
Hi,
I have a table with a varbit field; on this varbit field I am
performing filtration using the bit_and and bit_or operators. Is there an
index type that would help speed up these operations
(bit_and, bit_or)?
Allan.
I am looking for a way to obtain the words that are common amongst two
tsvector records.
The long workaround I know is to:
1) convert the contents of the tsvector fields to text, then find and
replace each single quote followed by a space and a single quote with a
comma character, then strip away the
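The text-manipulation workaround boils down to parsing the lexemes out of each tsvector's text form and intersecting sets; a sketch (simple lexemes only, lexemes containing embedded quotes are not handled):

```python
import re

def lexemes(tsv_text: str) -> set:
    """Pull the quoted lexemes out of a tsvector's text representation,
    e.g. "'cat':1 'dog':3" -> {'cat', 'dog'}."""
    return set(re.findall(r"'([^']+)'", tsv_text))

a = lexemes("'cat':1 'dog':3 'fish':5")
b = lexemes("'dog':2 'fish':1 'owl':4")
print(sorted(a & b))  # ['dog', 'fish']
```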
Greetings,
I have a table which I would like to (conditionally and efficiently)
populate if the would-be new records do not already exist in that
table.
So far I see that I may use either a left join with a WHERE
right-table-key-field IS NULL, or a subquery and a NOT IN
clause.
On Mon, Aug 9, 2010 at 1:51 AM, Scott Frankel fran...@circlesfx.com wrote:
On Aug 8, 2010, at 2:45 AM, Torsten Zühlsdorff wrote:
Scott Frankel schrieb:
On Aug 6, 2010, at 6:13 AM, Torsten Zühlsdorff wrote:
John Gage schrieb:
On reflection, I think what is needed is a handbook that
Hi all,
I have a large table that contains redundancies in one field.
I am looking for a way to identify (or extract) a non-redundant set of
rows (_any_ one record per group) from this table, and for each record
of this distinct set of rows I would like to capture its other
fields.
Below
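In Postgres this is what SELECT DISTINCT ON (field) ... achieves; the pick-any-one-per-group logic can be sketched as:

```python
from itertools import groupby
from operator import itemgetter

rows = [                       # (redundant key, other fields...)
    ("k1", "r1"), ("k2", "r2"), ("k1", "r3"), ("k2", "r4"),
]
rows.sort(key=itemgetter(0))   # group rows by the redundant field
# Keep one representative per group, with its other fields intact
picked = [next(group) for _, group in groupby(rows, key=itemgetter(0))]
print(picked)  # [('k1', 'r1'), ('k2', 'r2')]
```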
On Sat, Jul 24, 2010 at 10:38 AM, Scott Marlowe scott.marl...@gmail.com wrote:
On Sat, Jul 24, 2010 at 12:56 AM, Allan Kamau kamaual...@gmail.com wrote:
Hi all,
I have a large table that contains redundancies as per one field.
I am looking for a way to identify (or extract) a non redundant
On Thu, Jul 15, 2010 at 1:04 AM, Andrew Bartley ambart...@gmail.com wrote:
Thanks to all that replied,
I used Joe Conway's suggestion, using grep and an extracted list of tables,
functions and views from the DB. It worked very well.
I will attach the code I used to this thread once complete.
On Thu, Jun 24, 2010 at 11:11 PM, Joshua D. Drake j...@commandprompt.com
wrote:
On Thu, 2010-06-24 at 16:06 -0400, akp geek wrote:
It's not for the user postgres.. If I have created a testuser, can I
hide the code for that testuser?
No. Of course you have to wonder, why you would do that. It
On Fri, Jun 25, 2010 at 12:28 AM, Dave Page dp...@pgadmin.org wrote:
On Thu, Jun 24, 2010 at 10:20 PM, Joshua D. Drake j...@commandprompt.com
wrote:
On Thu, 2010-06-24 at 22:18 +0100, Dave Page wrote:
I have no problem with him trying to protect his hard earned work. I
just think he is
On Wed, Jun 16, 2010 at 1:15 PM, Craig Ringer
cr...@postnewspapers.com.au wrote:
On 16/06/10 16:56, Allan Kamau wrote:
The function I have mentioned above seems to hang indefinitely (now
12hours and counting) while it took only 2secs for the many times it
was previously invoked from
2010/6/16 Filip Rembiałkowski filip.rembialkow...@gmail.com:
2010/6/15 Allan Kamau kamaual...@gmail.com
I have a PL/pgSQL function that gets called many times, with
different parameter values each time. Most
invocations of this function run in a couple of seconds
On Wed, Jun 16, 2010 at 1:15 PM, Craig Ringer
cr...@postnewspapers.com.au wrote:
On 16/06/10 16:56, Allan Kamau wrote:
The function I have mentioned above seems to hang indefinitely (now
12hours and counting) while it took only 2secs for the many times it
was previously invoked from
On Tue, Jun 15, 2010 at 5:03 PM, Allan Kamau kamaual...@gmail.com wrote:
I have a PL/pgSQL function that gets called many times, with
different parameter values each time. Most
invocations of this function run in a couple of seconds; however, some
invocations
I have a PL/pgSQL function that gets called many times, with
different parameter values each time. Most
invocations of this function run in a couple of seconds; however, some
invocations of the same function run (on the same dataset) for hours
with very little disk
On Wed, Apr 14, 2010 at 11:39 AM, Allan Kamau kamaual...@gmail.com wrote:
I have a brief question - I can provide more information if it is not clear.
I would like to perform pairwise intersect operations between several
pairs of sets (where a set is a list or vector of labels), I have many
On Wed, Apr 21, 2010 at 3:41 PM, Andre Lopes lopes80an...@gmail.com wrote:
Hi,
Thanks for the reply.
[quote]
The other way is to let the cron job spawn new processes (up to a
limited number of child proceses) as long as there are mails to send.
These child processes runs as long as there
I have a brief question - I can provide more information if it is not clear.
I would like to perform pairwise intersect operations between several
pairs of sets (where a set is a list or vector of labels). I have many
such pairs of sets, and the counts of their elements may vary greatly.
Is there
I am performing some array membership operations (namely <@ or @>)
on large arrays.
One of the arrays in this pair of arrays being compared is contained
in a database field. The other array of this pair will be dynamically
generated from an array intersection activity in another part of the
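The containment operators (presumably <@ and @>, mangled in the archive) behave like set containment; in Python terms:

```python
stored = {3, 5, 8, 13}    # array held in the table row
probe = {5, 13}           # dynamically generated comparison array

assert probe <= stored    # probe <@ stored: probe is contained by stored
assert stored >= probe    # stored @> probe: stored contains probe
assert not {5, 99} <= stored   # one unmatched element breaks containment
```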
On Fri, Mar 26, 2010 at 11:17 PM, David Kerr d...@mr-paradox.net wrote:
Howdy all,
I have some apps that are connecting to my DB via direct JDBC and I'd like to
pool their connections.
I've been looking at poolers for a while, and pgbouncer and pgpool-ii seem to
be some of the most
Hi,
A classic problem. I would like to assign integer values (from a
sequence) to records in a table based on the order (of contents) of
other field(s) in the same table.
I have come across the solution before but simply cannot recall it, and a
search returns no clues. Please point me to a resource with
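The usual answer in SQL is a window function, row_number() OVER (ORDER BY ...); the effect can be sketched as:

```python
records = ["banana", "apple", "cherry"]   # contents of the ordering field
# Assign 1, 2, 3, ... in the sort order of the field's contents
rank = {value: n for n, value in enumerate(sorted(records), start=1)}
print(rank)  # {'apple': 1, 'banana': 2, 'cherry': 3}
```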
I would like to know if it is possible for PostgreSQL to make use of
multiple processes for a single query.
I am developing a data-driven application which has a homemade plpgsql
function that identifies duplicate records (by a single column) and
updates a boolean field (of these duplicates) as such.
When writing dynamic commands (those having EXECUTE 'some SQL
query';), is there a way to prevent interpretation of input parameters
as pieces of SQL commands? Does the quote_literal() function implicitly
protect against this unwanted behaviour?
Allan.
On Wed, Mar 17, 2010 at 11:41 AM, Craig Ringer
cr...@postnewspapers.com.au wrote:
Allan Kamau wrote:
When writing dynamic commands (those having EXECUTE 'some SQL
query';), is there a way to prevent interpretation of input parameters
as pieces of SQL commands?
EXECUTE ... USING
--
Craig
On Mon, Mar 8, 2010 at 10:16 AM, Scott Marlowe scott.marl...@gmail.com wrote:
On Sun, Mar 7, 2010 at 11:31 PM, Allan Kamau kamaual...@gmail.com wrote:
On Mon, Mar 8, 2010 at 5:49 AM, Scott Marlowe scott.marl...@gmail.com
wrote:
On Sun, Mar 7, 2010 at 1:45 AM, Allan Kamau kamaual...@gmail.com
Hi,
I am looking for an efficient and effective solution to eliminate
duplicates in a continuously updated cumulative transaction table
(no deletions are envisioned as all non-redundant records are
important). Below is my situation.
I do have a “central” transaction table (A) that is expected to
On Mon, Mar 8, 2010 at 5:49 AM, Scott Marlowe scott.marl...@gmail.com wrote:
On Sun, Mar 7, 2010 at 1:45 AM, Allan Kamau kamaual...@gmail.com wrote:
Hi,
I am looking for an efficient and effective solution to eliminate
duplicates in a continuously updated cumulative transaction table
On Mon, Feb 22, 2010 at 2:10 PM, beulah prasanthi itsbeu...@gmail.com wrote:
Helo
I am working on spring project with postgres 8.4
I wrote a function in postgres to which I am passing the argument email,
an email[] array.
From the front end we need to insert data into that email array, so I used
On Mon, Feb 15, 2010 at 9:23 PM, Andre Lopes lopes80an...@gmail.com wrote:
I have contacted the support center at a2hosting.com again, and the answer
was that there is no manual creation of triggers on PostgreSQL, but the guy
sent me a link with MySQL information about the subject,
Hi,
I have written several plpgsql functions and triggers, and defined
several tables, over time for a database application I have been
developing. The application has evolved as I have gained a better
understanding of the solution, and so I have written newer plpgsql
functions and other database objects
On Fri, Feb 12, 2010 at 3:47 PM, Richard Huxton d...@archonet.com wrote:
On 12/02/10 12:32, Allan Kamau wrote:
If I start with a clean deployment, is there a way I could perhaps
query the table(s) in pg_catalog for example to find out the database
objects (I have constructed) that have been
On Fri, Feb 12, 2010 at 9:13 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Richard Huxton d...@archonet.com writes:
On 12/02/10 15:10, Allan Kamau wrote:
Therefore I am looking for a solution that contains
last-accessed-time data for these objects, especially for the
functions and maybe the triggers
Hi,
I am running postgreSQL-8.4.2. I have a table that stores a single xml
document per row in one of its fields. I would like to use xpath to
retrieve portions of these xml documents.
Is there a way to do so? (I am running postgreSQL 8.4.2 configured
(built) with --with-libxml and --with-libxslt
On Wed, Feb 10, 2010 at 11:06 AM, Allan Kamau kamaual...@gmail.com wrote:
Hi,
I am running postgreSQL-8.4.2. I have a table that stores a single xml
document per row in one of its fields. I would like to use xpath to
retrieve portions of these xml documents.
Is there a way to do so? (I am
to access data from a varchar field using an
integer...
Can you paste here your schema for that table?
P.
On Wed, Feb 10, 2010 at 11:06 AM, Allan Kamau kamaual...@gmail.com wrote:
Hi,
I am running postgreSQL-8.4.2. I have a table that stores a single xml
document per row in one of its
On Wed, Feb 10, 2010 at 10:11 AM, Steve Atkins st...@blighty.com wrote:
On Feb 9, 2010, at 10:38 PM, AI Rumman wrote:
Thanks for your quick answes.
But if I use a file and then store the name in the database, is it possible
to use TEXT search tsvector and tsquery indexing on these external
On Sun, Jan 3, 2010 at 5:43 AM, David Fetter da...@fetter.org wrote:
Folks,
I'm working on some SQL intended to expose lock conditions (deadlock,
etc.), but to do this, I need to be able to create such conditions at
will. Rather than build an entire infrastructure from scratch, I'd
like to
On Sun, Jan 3, 2010 at 9:30 AM, Brian Modra epai...@googlemail.com wrote:
2010/1/3 Jamie Kahgee jamie.kah...@gmail.com:
I need a function like regexp_split_to_table where I can split a string to a
table by a space delimiter.
so:
Please Help Me
would convert to:
Please
Help
Me
However I'm
On Sun, Jan 3, 2010 at 9:37 AM, Allan Kamau kamaual...@gmail.com wrote:
On Sun, Jan 3, 2010 at 9:30 AM, Brian Modra epai...@googlemail.com wrote:
2010/1/3 Jamie Kahgee jamie.kah...@gmail.com:
I need a function like regexp_split_to_table where I can split a string to a
table by a space
On Sat, Dec 26, 2009 at 3:02 PM, Bill Moran wmo...@potentialtech.com wrote:
S Arvind arvindw...@gmail.com wrote:
Web application have single DB only..
I'm unsure what you mean by that and how it relates to my answer.
On Fri, Dec 25, 2009 at 7:03 PM, Bill Moran wmo...@potentialtech.comwrote:
20, 2009 at 10:06 PM, Allan Kamau kamaual...@gmail.com wrote:
On Mon, Dec 21, 2009 at 12:13 AM, Andre Lopes lopes80an...@gmail.com
wrote:
Hi,
I need to know if there is something like Oracle Forms in the Open
Source
world that works with PostgreSQL.
If do you know something, please
Hi all,
I have a simple question: are the temporary database objects created
within a plpgsql function visible/available outside the function (or
its implicit transaction) in any way? I am dropping and creating temporary
database objects within a function, and I am calling this function
from a
Hi,
I am looking for an efficient way to implement a sliding window view
of the data from a query.
I am developing a simple website and would like to provide for
viewing (fetching) only a predetermined maximum number of records per
page.
For example, to view 100 records with 30 as the predetermined
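With LIMIT/OFFSET (or the FETCH FIRST ... ROWS ONLY clause mentioned later in the thread), the per-page bounds are simple arithmetic:

```python
def page_bounds(page: int, page_size: int = 30):
    """Return the (OFFSET, LIMIT) pair for a 1-based page number."""
    return (page - 1) * page_size, page_size

print(page_bounds(1))  # (0, 30)
print(page_bounds(4))  # (90, 30)
```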
Hi,
I did follow the basic advice and consulted the documentation for
SELECT, and came across the [ FETCH { FIRST | NEXT } [ count ] { ROW |
ROWS } ONLY ] clause, which seems to satisfy my requirement.
Allan.
On Tue, Dec 8, 2009 at 9:49 PM, Allan Kamau kamaual...@gmail.com wrote:
Hi,
I am looking
Hi all,
I would like to increase the database object name limit from 64
characters to maybe 128 characters, to avoid name conflicts after
truncation of long table/sequence names.
I have seen a solution to this sometime back which includes (building
from source) modifying a header file then
On Fri, Nov 20, 2009 at 11:21 AM, A. Kretschmer
andreas.kretsch...@schollglas.com wrote:
In response to Allan Kamau :
Hi all,
I would like to increase the database object name limit from 64
characters to maybe 128 characters, to avoid name conflicts after
truncation of long table/sequence
Could it be that a connection may be invoking a newly recreated
function, rewritten in C, that it had previously called before its
recreation?
Allan.
On Mon, Oct 5, 2009 at 3:06 PM, Miklosi Attila amikl...@freemail.hu wrote:
What does the message below mean?
server closed the connection
Hi,
I have a query which makes use of the results of an aggregate
function (for example bit_or) several times in the output column list
of the SELECT clause. Does PostgreSQL execute the aggregate
function only once and provide the output to the other calls to the
same aggregate function?