On 14.11.2013 02:26, Jeff Janes wrote:
On Wed, Nov 13, 2013 at 3:53 PM, Sergey Burladyan <eshkin...@gmail.com> wrote:
Jeff Janes jeff.ja...@gmail.com writes:
If I am not mistaken, it looks like lazy_scan_heap(), called from
lazy_vacuum_rel() (see [1]), skips pages even when it runs with
scan_all == true,
Hi
For decades, this type of problem has been the meat and vegetables of
discussions about SQL programming and design.
One writer on this subject has stood out, thanks to his mental clarity
and ability to set out complicated concepts in a readily comprehensible
manner.
His name is Joe
On 18/11/2013 02:16, Hengky Liwandouw wrote:
Dear Friends,
Please help with this SELECT command; I have tried many times and
still cannot get the result I want.
I have searched for a solution on Google but still could not find the
right answer to the problem.
I have 3
If the tables aren't huge, you're not concerned about optimization, and you
just want to get your numbers, I think something like this would do the
trick. I haven't actually tried it 'cause I didn't have easy access to
your tables:
SELECT
a.product_id,
a.product_name,
b.initial_stock_sum,
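The quoted query is cut off above; as a hedged sketch of the general shape such a query could take (table and column names here are assumptions, not from the thread), one can join the product table to per-product aggregates of the stock movements:

```sql
-- Sketch: per-product stock summary.
-- Assumes a product table and a stock_movement table; names are illustrative.
SELECT p.product_id,
       p.product_name,
       COALESCE(SUM(m.qty_in), 0)  AS total_in,
       COALESCE(SUM(m.qty_out), 0) AS total_out,
       COALESCE(SUM(m.qty_in), 0) - COALESCE(SUM(m.qty_out), 0) AS on_hand
FROM product p
LEFT JOIN stock_movement m ON m.product_id = p.product_id
GROUP BY p.product_id, p.product_name;
```

The LEFT JOIN keeps products with no movements in the result, with zero totals thanks to COALESCE.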
Thanks all for your concern and help.
I have tried David's suggestion and it works. As you all said, there are so
many important features in PostgreSQL; I really have to spend time studying it.
Previously I used WinDev to develop the front-end application, with HyperFileSQL
as the database server.
Thanks a lot Ken,
I will try it soon.
But when the table becomes huge (how big is 'huge' in Postgres?), how can
such a command be optimized?
I have indexes on all the important fields like date, productid, supplierid,
customerid and so on.
Optimization is really important, as I plan to keep all
In general, when I have to handle Ledger type data (which this problem
is), I tend to hold data in 3 tables
1. Master Ledger ( Product ID, Name, etc)
2. Master Ledger Balances(Product ID, Fiscal_Year, Opening Balance,
Net_Transaction_P1, Net_Transaction_P2, ... etc)
3. Master Ledger
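A minimal sketch of the first two tables in that layout (the third item is truncated above, and all column types here are illustrative assumptions, not from the original post):

```sql
-- 1. Master ledger: one row per product.
CREATE TABLE master_ledger (
    product_id  integer PRIMARY KEY,
    name        text NOT NULL
);

-- 2. Yearly balances: opening balance plus one net-movement column per period.
CREATE TABLE master_ledger_balances (
    product_id         integer REFERENCES master_ledger,
    fiscal_year        integer,
    opening_balance    numeric NOT NULL DEFAULT 0,
    net_transaction_p1 numeric NOT NULL DEFAULT 0,
    net_transaction_p2 numeric NOT NULL DEFAULT 0,
    -- ... one column per remaining period ...
    PRIMARY KEY (product_id, fiscal_year)
);
```

The balance as of any period is then the opening balance plus the sum of the period columns up to that point, which avoids re-aggregating the full transaction history for every report.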
oka wrote:
I have a question.
Here is the data:
create table chartbl
(
caseno int,
varchar5 varchar(5)
);
insert into chartbl values(1, ' ');
insert into chartbl values(2, '');
The following two queries return the same result:
select * from chartbl where
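When a single space and an empty string appear to compare the same, it helps to look at what is actually stored. A sketch using the `chartbl` table from the question:

```sql
-- Make the stored values visible: their length and a quoted representation.
SELECT caseno,
       length(varchar5)        AS len,
       quote_literal(varchar5) AS shown
FROM chartbl;
-- For varchar, ' ' and '' are distinct values. It is character(n) (bpchar)
-- where trailing spaces are insignificant in comparisons, so the declared
-- column type matters here.
```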
Hello
We got this error in our logs yesterday.
2013-11-17 19:01:02.278 CET,,,29356,,52565a6f.72ac,1,,2013-10-10
09:42:39 CEST,,0,LOG,0,could not truncate directory
pg_subtrans: apparent wraparound,
2013-11-17 19:06:02.027
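The "apparent wraparound" warning for pg_subtrans is usually tied to long-running transactions or to databases falling behind on anti-wraparound vacuuming. As a generic first check (a sketch, not specific to this report; pg_stat_activity column names as of 9.2+):

```sql
-- How close each database is to transaction-ID wraparound:
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;

-- Long-open transactions that can hold back pg_subtrans truncation:
SELECT pid, xact_start, state, query
FROM pg_stat_activity
ORDER BY xact_start;
```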
On 11/17/2013 11:48 PM, David Johnston wrote:
I am guessing it is the need for the index to point to new versions of the
physical record that is churning the index so much and causing this kind
of bloat?
Bingo.
I am preparing to REINDEX the unique index and DROP the non-unique one over
Hi,
I am having some trouble building the pg_trgm module from source.
At first the regexport.h file was missing from /usr/include, so I got it.
Now I still need the regexport.c file, and probably the other one as well.
You can see the files in this link:
On 11/18/2013 07:34 AM, Janek Sendrowski wrote:
Hi,
I am having some trouble building the pg_trgm module from source.
At first the regexport.h file was missing from /usr/include, so I got it.
Now I still need the regexport.c file, and probably the other one as well.
You can see the files in this link:
Adrian Klaver <adrian.kla...@gmail.com> writes:
On 11/18/2013 07:34 AM, Janek Sendrowski wrote:
I am having some trouble building the pg_trgm module from source.
FYI I find those files in the source I downloaded from the Postgres site:
Sounds like Janek is trying to build 9.3 pg_trgm against a pre-9.3
Hi,
Just wondering what kind of EXECUTE statement (within a function) I should
use to get the planner to use the index for the following:
SELECT pcode searchmatch, geometry FROM postcode
WHERE (replace(lower(pcode), ' '::text, ''::text)) LIKE
(replace((lower($1)::text),'
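For a left-anchored LIKE over an expression like this, one approach (a sketch assuming the `postcode` table from the question; the search literal is made up) is an expression index with the `text_pattern_ops` operator class, which the planner can use for LIKE patterns that do not start with a wildcard:

```sql
-- Expression index on the normalized form used in the WHERE clause.
-- text_pattern_ops lets the planner use the index for left-anchored LIKE.
CREATE INDEX postcode_norm_idx
    ON postcode ((replace(lower(pcode), ' ', '')) text_pattern_ops);

-- The query must use the exact same expression, with no leading wildcard:
SELECT pcode AS searchmatch, geometry
FROM postcode
WHERE replace(lower(pcode), ' ', '')
      LIKE replace(lower('AB1 2CD'), ' ', '') || '%';
```

Inside a function, a plain SQL statement like this is usually enough; EXECUTE only matters if the plan needs to be rebuilt for each parameter value.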
Hi,
My current version is 9.2. I could just update it.
I got the pg_trgm from here:
http://git.postgresql.org/gitweb/?p=postgresql.git;a=tree;f=contrib/pg_trgm;hb=refs/heads/master
And the regex files from here:
On Sun, Nov 17, 2013 at 1:33 PM, Stefan Keller sfkel...@gmail.com wrote:
Hi Edson,
On 2013/11/17, Edson Richter <edsonrich...@hotmail.com> wrote:
One question: would you please expand your answer and explain how would
this adversely affect async replication?
Is this a question or a hint
On 2013-11-18 04:37, Ken Tanzer wrote:
If the tables aren't huge, you're not concerned about optimization,
and you just want to get your numbers, I think something like this
would do the trick. I haven't actually tried it 'cause I didn't have
easy access to your tables:
SELECT
On Sun, Nov 17, 2013 at 11:12 PM, David Johnston pol...@yahoo.com wrote:
Having recently had a pg_dump error out due to not having enough disk, it
occurs to me that it would be nice for pg_dump to remove the partial dump
file it was creating (if possible/known) instead of having it sit around
On Sun, Nov 17, 2013 at 4:02 PM, Stefan Keller sfkel...@gmail.com wrote:
2013/11/18 Andreas Brandl m...@3.141592654.de wrote:
What is your use-case?
It's geospatial data from OpenStreetMap stored in a schema optimized for
PostGIS extension (produced by osm2pgsql).
BTW: Having said (to
On 11/18/2013 08:32 AM, Janek Sendrowski wrote:
Hi,
My current version is 9.2. I could just update it.
I got the pg_trgm from here:
http://git.postgresql.org/gitweb/?p=postgresql.git;a=tree;f=contrib/pg_trgm;hb=refs/heads/master
And the regex files from here:
Janek Sendrowski jane...@web.de wrote:
My current version is 9.2. I could just update it.
I got the pg_trgm from here:
http://git.postgresql.org/gitweb/?p=postgresql.git;a=tree;f=contrib/pg_trgm;hb=refs/heads/master
Get a production release version from the snapshot tarball
downloads or use
Hi Jeff and Martin
On 18. November 2013 17:44 Jeff Janes jeff.ja...@gmail.com wrote:
I rather doubt that. All the bottlenecks I know about for well cached
read-only workloads are around
locking for in-memory concurrency protection, and have little or nothing
to do with secondary storage.
On Tue, Nov 19, 2013 at 02:39:17AM +0100, Stefan Keller wrote:
Referring to the application is something you can always say, but that
shouldn't prevent us from enhancing Postgres.
With respect, that sounds like a sideways version of 'You should
optimise for $usecase.' You could be right, but I think the
On Sun, Nov 17, 2013 at 10:48 PM, David Johnston pol...@yahoo.com wrote:
I am preparing to REINDEX the unique index and DROP the non-unique one over
the same field - probably Tuesday evening. Does everything I am saying here
sound kosher or would someone like me to provide additional
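One low-lock way to carry out that kind of swap (a generic sketch with hypothetical index and table names, not the poster's actual DDL; DROP INDEX CONCURRENTLY requires 9.2+) is to build a replacement index concurrently before dropping the bloated ones:

```sql
-- Build a fresh unique index without blocking writes:
CREATE UNIQUE INDEX CONCURRENTLY my_table_col_key_new ON my_table (col);

-- Drop the redundant non-unique index without blocking writes:
DROP INDEX CONCURRENTLY my_table_col_idx;

-- Swap out the bloated unique index for the fresh one:
DROP INDEX my_table_col_key;
ALTER INDEX my_table_col_key_new RENAME TO my_table_col_key;
```

Note that if the unique index backs a constraint, it cannot simply be dropped; the replacement has to be attached with ALTER TABLE ... ADD CONSTRAINT ... USING INDEX instead.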
We'd like to seek out your expertise on postgresql regarding this error that
we're getting in an analytical database.
Some specs:
proc: Intel Xeon X5650 @ 2.67Ghz dual procs 6-core, hyperthreading on.
memory: 48GB
OS: Oracle Enterprise Linux 6.3
postgresql version: 9.1.9
shared_buffers: 18GB
On Mon, Nov 18, 2013 at 12:40 PM, Brian Wong bwon...@hotmail.com wrote:
We'd like to seek out your expertise on postgresql regarding this error
that we're getting in an analytical database.
Some specs:
proc: Intel Xeon X5650 @ 2.67Ghz dual procs 6-core, hyperthreading on.
memory: 48GB
OS:
Jeff Janes wrote:
On Sun, Nov 17, 2013 at 11:12 PM, David Johnston <polobo@...> wrote:
Having recently had a pg_dump error out due to not having enough disk it
occurs to me that it would be nice for pg_dump to remove the partial dump
file it was creating (if possible/known) instead of
David Johnston wrote:
Jeff Janes wrote:
On Sun, Nov 17, 2013 at 11:12 PM, David Johnston <polobo@...> wrote:
Having recently had a pg_dump error out due to not having enough disk, it
occurs to me that it would be nice for pg_dump to remove the partial dump
file it was creating (if
On Mon, Nov 18, 2013 at 8:30 PM, Brian Wong bwon...@hotmail.com wrote:
I've tried work_mem values from 1GB all the way up to 40GB, with no
effect on the error. I'd like to think of this problem as a server process
memory issue (not the server's buffers), or a client process memory issue, primarily
Hi All,
I have added one column of xml type; after adding it I get the following
error:
org.postgresql.util.PSQLException: ERROR: could not identify an equality
operator for type xml
If I remove the column, the following query works fine:
select * from (select * from KM_COURSE_MAST where ID in
gajendra s v wrote
Hi All,
I have added one column with xml type ,after adding I am getting following
error.
org.postgresql.util.PSQLException: ERROR: could not identify an equality
operator for type xml
If I have removed column following query works fine
select * from (select *
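That error comes from any operation that must compare whole rows or columns for equality (DISTINCT, UNION, GROUP BY, IN on row values), because type xml has no equality operator. A hedged sketch of the usual workaround, using the KM_COURSE_MAST table from the question but with hypothetical column names (`name`, `content`): either leave the xml column out of the comparing part of the query, or cast it to text:

```sql
-- DISTINCT fails if a raw xml column is included:
--   SELECT DISTINCT * FROM km_course_mast;   -- ERROR: no equality operator
-- Casting the xml column to text makes it comparable:
SELECT DISTINCT id, name, content::text AS content
FROM km_course_mast;
```

The text cast compares the serialized form, which is usually what is wanted for de-duplication, though two documents that differ only in whitespace will compare as different.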
Hi Andrew
You wrote:
And indeed, given the specifics of the use
case you're outlining, it's as much a demonstration of that evaluation
as a repudiation of it.
Maybe my use cases seem to be a special case (to me and over a million
users of OpenStreetMap it's not).
Anyhow: That's why I'm