I have an `account` table with 5.3M rows, with primary key `id` of
type `text` (and 600+ columns if that matters).
I'm trying to create a `newaccount` table with the same schema but
600k newly imported rows, then insert all the old rows for which `id`
isn't already in the set of newly imported IDs.
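A sketch of that carry-over step, assuming `newaccount` already holds the 600k imported rows (an anti-join via NOT EXISTS usually scales better than NOT IN on millions of ids):

```sql
-- Copy over every old row whose id was not freshly imported.
INSERT INTO newaccount
SELECT a.*
FROM account a
WHERE NOT EXISTS (
    SELECT 1 FROM newaccount n WHERE n.id = a.id
);
```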
Guessing the answer's no, but is there any way to construct indexes
such that I can safely put them on (faster) volatile storage? (Just to
be clear, I'm asking about indexes for *logged* tables.)
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
On 3 May 2013 21:06, Yang Zhang yanghates...@gmail.com wrote:
Guessing the answer's no, but is there any way to construct indexes
such that I can safely put them on (faster) volatile storage? (Just to
be clear, I'm asking about indexes for *logged* tables.)
Yes:
CREATE INDEX ... TABLESPACE
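In sketch form (tablespace name and path are illustrative):

```sql
CREATE TABLESPACE fastspace LOCATION '/mnt/fast_disk/pg_idx';
CREATE INDEX account_email_idx ON account (email) TABLESPACE fastspace;
```

Whether losing that tablespace is actually safe to recover from is the open question in this thread, so treat this purely as the syntax.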
On Tue, Apr 30, 2013 at 8:13 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Ian Lawrence Barwick barw...@gmail.com writes:
2013/5/1 Yang Zhang yanghates...@gmail.com:
That is unfortunate. Good thing I asked, I guess. Do you have a
pointer to said blog post?
I think this is the post in question
On Tue, Apr 30, 2013 at 11:14 PM, Yang Zhang yanghates...@gmail.com wrote:
On Tue, Apr 30, 2013 at 8:13 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Ian Lawrence Barwick barw...@gmail.com writes:
I think this is the post in question:
http://thebuild.com/blog/2013/03/10/you-cannot-recover-from
On Wed, May 1, 2013 at 4:56 PM, Jeff Janes jeff.ja...@gmail.com wrote:
That brings up another point to consider. If wal level is minimal, then
tables which you bulk load in the same transaction as you created them or
truncated them will not get any WAL records written. (That is the main
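The pattern being described, roughly: with wal_level = minimal, create or truncate the table in the same transaction as the load (file path is illustrative):

```sql
BEGIN;
TRUNCATE newaccount;  -- or CREATE TABLE newaccount (...) in this transaction
COPY newaccount FROM '/tmp/new_rows.csv' WITH (FORMAT csv);
COMMIT;
```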
I would intuit that it's fine, but I just want to make sure there are
no gotchas from a recovery point of view:
If I were to lose my temp tablespace upon system crash, would this
prevent proper crash recovery?
Also, if I were to omit the temp tablespace from the base backup,
would that prevent
That is unfortunate. Good thing I asked, I guess. Do you have a
pointer to said blog post?
On Tue, Apr 30, 2013 at 5:31 PM, Darren Duncan dar...@darrenduncan.net wrote:
On 2013.04.30 4:55 PM, Yang Zhang wrote:
I would intuit that it's fine, but I just want to make sure there are
no gotchas
On Tue, Apr 30, 2013 at 7:14 PM, Ian Lawrence Barwick barw...@gmail.com wrote:
2013/5/1 Yang Zhang yanghates...@gmail.com:
On Tue, Apr 30, 2013 at 5:31 PM, Darren Duncan dar...@darrenduncan.net
wrote:
On 2013.04.30 4:55 PM, Yang Zhang wrote:
I would intuit that it's fine, but I just want
On Tue, Apr 30, 2013 at 8:13 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Ian Lawrence Barwick barw...@gmail.com writes:
2013/5/1 Yang Zhang yanghates...@gmail.com:
That is unfortunate. Good thing I asked, I guess. Do you have a
pointer to said blog post?
I think this is the post in question
On Fri, Apr 26, 2013 at 9:41 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Yang Zhang yanghates...@gmail.com writes:
It currently takes up to 24h for us to run a large set of UPDATE
statements on a database, which are of the form:
UPDATE table SET field1 = constant1, field2 = constant2
On Sat, Apr 27, 2013 at 12:24 AM, Yang Zhang yanghates...@gmail.com wrote:
On Fri, Apr 26, 2013 at 9:41 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Yang Zhang yanghates...@gmail.com writes:
It currently takes up to 24h for us to run a large set of UPDATE
statements on a database, which
this alternative be faster?
Kind regards,
Misa
On Saturday, April 27, 2013, Yang Zhang wrote:
On Fri, Apr 26, 2013 at 9:41 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Yang Zhang yanghates...@gmail.com writes:
It currently takes up to 24h for us to run a large set of UPDATE
statements
On Sat, Apr 27, 2013 at 2:54 AM, Yang Zhang yanghates...@gmail.com wrote:
On Sat, Apr 27, 2013 at 1:55 AM, Misa Simic misa.si...@gmail.com wrote:
Hi,
If dataset for update is large...
Maybe best would be:
From client machine, instead of sending update statements with data - export
data
* FROM tmp WHERE id NOT IN (SELECT id FROM bar);
The question I have remaining is whether the bulk UPDATE will be able
to update many rows efficiently (smartly order them to do largely
sequential scans) - if so, I imagine it would be faster than the
above.
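One set-based way to do that, assuming the new values have been COPYed into a temp table `tmp` keyed by id (column names illustrative):

```sql
UPDATE account a
SET field1 = t.field1,
    field2 = t.field2
FROM tmp t
WHERE a.id = t.id;
```

A single UPDATE ... FROM join lets the planner pick a largely sequential plan instead of millions of individual index probes.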
On Saturday, April 27, 2013, Yang Zhang
On Sat, Apr 27, 2013 at 11:55 AM, Jeff Janes jeff.ja...@gmail.com wrote:
On Sat, Apr 27, 2013 at 10:40 AM, Yang Zhang yanghates...@gmail.com wrote:
My question really boils down to: if we're interested in using COW
snapshotting (a common feature of modern filesystems and hosting
environments
It currently takes up to 24h for us to run a large set of UPDATE
statements on a database, which are of the form:
UPDATE table SET field1 = constant1, field2 = constant2, ... WHERE
id = constid
(We're just overwriting fields of objects identified by ID.)
The tables have handfuls of
suggest this won't get us enough of a
boost as much as possibly refactoring the way we're executing these
bulk updates in our application.)
On Fri, Apr 26, 2013 at 5:27 PM, Gavin Flower
gavinflo...@archidevsys.co.nz wrote:
On 27/04/13 12:14, Yang Zhang wrote:
It currently takes up to 24h for us
We're running on EBS volumes on EC2. We're interested in leveraging
EBS snapshotting for backups. However, does this mean we'd need to
ensure our pg_xlog is on the same EBS volume as our data?
(I believe) the usual reasoning for separating pg_xlog onto a separate
volume is for performance.
appreciated.
On 27/04/13 13:35, Yang Zhang wrote:
We're using Postgresql 9.1.9 on Ubuntu 12.04 on EBS volumes on
m1.xlarge instances, which have:
15 GiB memory
8 EC2 Compute Units (4 virtual cores with 2 EC2 Compute Units each)
64-bit platform
(Yes, we're moving to EBS Optimized instances
Any hints with this question I had posted to SO?
http://stackoverflow.com/questions/15965785/why-is-postgresql-9-1-not-using-index-for-simple-equality-select
Pasted here as well. Thanks.
My table `lead` has an index:
\d lead
...
Indexes:
lead_pkey PRIMARY KEY, btree (id)
---
Seq Scan on lead (cost=100.00..1319666.99 rows=1 width=5208)
Filter: (email = 'f...@blah.com'::text)
(2 rows)
On Fri, Apr 12, 2013 at 1:13 AM, John R Pierce pie...@hogranch.com wrote:
On 4/12/2013 1:03 AM, Yang Zhang wrote:
db=# explain
It's actually just `text`.
I updated my SO question with some more info including explain analyze
(no difference), \d,
and your last incantation.
Thanks!
On Fri, Apr 12, 2013 at 7:11 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Yang Zhang yanghates...@gmail.com writes:
db=# explain select * from
Apologies for that Tom. I will paste the information in line once I'm
back at my computer. I do appreciate your help.
On Fri, Apr 12, 2013 at 10:24 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Yang Zhang yanghates...@gmail.com writes:
I updated my SO question with some more info including explain
on? Hesitant to kill the query in case it's almost done,
though I doubt it (didn't have the foresight to expose this
information in the client process - wasn't expecting to run into
this). Thanks in advance.
--
Yang Zhang
http://yz.mit.edu/
and timestamp between '2009-04-01'::date and '2010-04-01'::date;
, ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)
st.setFetchSize(8192)
rs = st.executeQuery()
On Thu, Sep 15, 2011 at 12:53 AM, Yang Zhang yanghates...@gmail.com wrote:
I have a simple query that's been running
Should also add that while the client is under no load (CPU1%, load
~0.1, mem ~20%), the server looks pretty busy (CPU 90-100% of 1 core,
load ~1.5, mem ~70-80%), but PG is the only thing using resources.
On Thu, Sep 15, 2011 at 12:58 AM, Yang Zhang yanghates...@gmail.com wrote:
If it matters
On Thu, Sep 15, 2011 at 1:22 AM, Tomas Vondra t...@fuzzy.cz wrote:
On 15 September 2011, 9:53, Yang Zhang wrote:
I have a simple query that's been running for a while, which is fine,
but it seems to be running very slowly, which is a problem:
mydb=# explain select user_id from den where user_id
PG tends to be picked on by the Linux OOM killer, so lately we've been
forcing the OOM killer to kill other processes first with this script:
while true; do
  for i in `pgrep postgres`; do
    echo -17 > /proc/$i/oom_adj
  done
  sleep 60
done
Is there a Better Way? Thanks in advance.
order by bytes desc
)
$query;
}
Is there any effort to add compression into PG, a la MySQL's
row_format=compressed or HBase's LZO block compression?
On Thu, Apr 14, 2011 at 5:07 PM, Adrian Klaver adrian.kla...@gmail.com wrote:
On Thursday, April 14, 2011 4:50:44 pm Craig Ringer wrote:
On 15/04/2011 7:01 AM, Yang Zhang wrote:
Is there any effort to add compression into PG, a la MySQL's
row_format=compressed or HBase's LZO block
On Thu, Apr 14, 2011 at 7:42 PM, Adrian Klaver adrian.kla...@gmail.com wrote:
On Thursday, April 14, 2011 5:51:21 pm Yang Zhang wrote:
adrian.kla...@gmail.com
Already know about TOAST. I could've been clearer, but that's not the
same as the block-/page-level compression I
On Thu, Apr 14, 2011 at 6:46 PM, mark dvlh...@gmail.com wrote:
-Original Message-
From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
ow...@postgresql.org] On Behalf Of Yang Zhang
Sent: Thursday, April 14, 2011 6:51 PM
To: Adrian Klaver
Cc: pgsql-general@postgresql.org
I'm using the postgresql 8.4.7 in Ubuntu 10.04, and I'm trying to use
.pgpass documented here:
http://www.postgresql.org/docs/8.4/interactive/libpq-pgpass.html
I have a ~/.pgpass with 600 perms containing:
myhostname.com:yang:yang:mypassword
However, it doesn't seem to get picked up by psql -h
Dah, left out the port.
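For reference, a `.pgpass` entry needs all five colon-separated fields -- hostname:port:database:username:password -- and the file must be mode 600 or libpq silently ignores it. A quick sanity check (path and values illustrative):

```shell
cat > /tmp/pgpass.example <<'EOF'
myhostname.com:5432:yang:yang:mypassword
EOF
chmod 600 /tmp/pgpass.example
awk -F: '{print NF}' /tmp/pgpass.example   # prints 5 when no field is missing
```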
On Fri, Apr 8, 2011 at 10:36 PM, Yang Zhang yanghates...@gmail.com wrote:
I'm using the postgresql 8.4.7 in Ubuntu 10.04, and I'm trying to use
.pgpass documented here:
http://www.postgresql.org/docs/8.4/interactive/libpq-pgpass.html
I have a ~/.pgpass with 600 perms
Last I could find on this, it was slated for 8.1 inclusion:
http://archives.postgresql.org/pgsql-patches/2004-08/msg00425.php
But it doesn't seem to be there in my PG8.4
Also, in PG8.4+, is there any way to set the default tablespace on a
per-schema basis?
On Thu, Apr 7, 2011 at 12:27 PM, Yang Zhang yanghates...@gmail.com wrote:
Last I could find on this, it was slated for 8.1 inclusion:
http://archives.postgresql.org/pgsql-patches/2004-08/msg00425.php
Anyone know how to set your password without having it visible in the
terminal, psql history, etc. (i.e. getpass(3))? Hoping there's a
program that already does this. Thanks.
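psql's `\password` meta-command does this: it prompts without echoing and sends the password pre-hashed. If you need to build the hash yourself in a script, the md5 scheme these versions use is just md5(password || username) with an 'md5' prefix -- a sketch (the prompting would come from Python's getpass, the analogue of getpass(3)):

```python
import hashlib
# from getpass import getpass  # e.g. pw = getpass("New password: ")

def pg_md5_password(user, password):
    """Return the legacy 'md5...' literal accepted by
    ALTER ROLE ... PASSWORD, so the cleartext never appears on the wire
    or in server logs."""
    return "md5" + hashlib.md5((password + user).encode()).hexdigest()
```

The returned value goes into `ALTER ROLE yang PASSWORD 'md5...'`.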
On Wed, Apr 6, 2011 at 2:15 PM, Yang Zhang yanghates...@gmail.com wrote:
On Wed, Apr 6, 2011 at 2:09 PM, Adrian Klaver adrian.kla...@gmail.com wrote:
On Wednesday, April 06, 2011 2:00:53 pm Yang Zhang wrote:
Anyone know how to set your password without having it visible in the
terminal, psql
On Wed, Apr 6, 2011 at 2:09 PM, Adrian Klaver adrian.kla...@gmail.com wrote:
On Wednesday, April 06, 2011 2:00:53 pm Yang Zhang wrote:
Anyone know how to set your password without having it visible in the
terminal, psql history, etc. (i.e. getpass(3))? Hoping there's a
program that already
How do I prevent accidental non-SSL connections (at least to specific
hosts) when connecting via psql? Is there any configuration for this?
Thanks.
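One libpq-level option (assuming the server has SSL enabled): put `sslmode=require` in the connection string, or export it, so a non-SSL fallback fails instead of silently connecting:

```shell
psql "host=myhostname.com dbname=yang sslmode=require"
# or, for everything launched from this shell:
export PGSSLMODE=require
```

This guards the client side only; `hostssl` lines in pg_hba.conf enforce it from the server side.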
On Wed, Apr 6, 2011 at 4:22 PM, Adrian Klaver adrian.kla...@gmail.com wrote:
On Wednesday, April 06, 2011 4:06:40 pm Yang Zhang wrote:
How do I prevent accidental non-SSL connections (at least to specific
hosts) when connecting via psql? Is there any configuration for this?
Thanks.
http
On Wed, Apr 6, 2011 at 4:53 PM, Adrian Klaver adrian.kla...@gmail.com wrote:
On Wednesday, April 06, 2011 4:24:30 pm Yang Zhang wrote:
On Wed, Apr 6, 2011 at 4:22 PM, Adrian Klaver adrian.kla...@gmail.com
wrote:
On Wednesday, April 06, 2011 4:06:40 pm Yang Zhang wrote:
How do I prevent
On Wed, Apr 6, 2011 at 4:57 PM, Scott Marlowe scott.marl...@gmail.com wrote:
On Wed, Apr 6, 2011 at 5:24 PM, Yang Zhang yanghates...@gmail.com wrote:
On Wed, Apr 6, 2011 at 4:22 PM, Adrian Klaver adrian.kla...@gmail.com
wrote:
On Wednesday, April 06, 2011 4:06:40 pm Yang Zhang wrote:
How do
On Wed, Apr 6, 2011 at 6:18 PM, Adrian Klaver adrian.kla...@gmail.com wrote:
On Wednesday, April 06, 2011 5:21:23 pm Yang Zhang wrote:
On Wed, Apr 6, 2011 at 4:57 PM, Scott Marlowe scott.marl...@gmail.com
wrote:
On Wed, Apr 6, 2011 at 5:24 PM, Yang Zhang yanghates...@gmail.com
wrote
Is it possible to attach or detach parts of a DB (e.g. tablespaces),
such that I can flexibly move the disks containing the DB tables
around to different hosts?
The last discussion I could find on this topic is from 2007, and the
answer was No:
Is there any tool for breaking down how much disk space is used by
(could be freed by removing) various tables, indexes, selected rows,
etc.? Thanks!
Any way I can have all newly created schemas/tables be owned by, or
have all permissions granted to, a certain group, without having to
remember to GRANT ALL ON [SCHEMA|TABLE] TO that group? Thanks in
advance.
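Since 9.0 there is a built-in answer (role, schema, and group names illustrative):

```sql
-- Future tables created by role yang in schema public
-- automatically grant ALL to group mygroup:
ALTER DEFAULT PRIVILEGES FOR ROLE yang IN SCHEMA public
    GRANT ALL ON TABLES TO mygroup;
```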
To: Yang Zhang
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Default permissions for CREATE SCHEMA/TABLE?
* Yang Zhang (yanghates...@gmail.com) wrote:
Any way I can have all newly created schemas/tables be owned by, or
have all permissions granted to, a certain group, without having
-field
compression in analytical workloads (though I'd be curious about
transactional workloads as well)? Thanks in advance.
the need to balance polling frequency and
latency)? Any recommended polling periods?
Aside: would 9.1's synchronous replication be adding anything that
lowers the replication delay compared to asynchronous replication? Or
could the changes be summarized as commits now wait for the standby?
and
parses client-side.
from my_pb2 import *
q=Q()
q.ParseFromString('\012\006hello?\020\000\030\000
\000*\014\012\006hello!\020\000\030\000')
print q
a: hello?
b: 0
c: 0
d: 0
e {
a: hello!
b: 0
c: 0
}
Any hints on how I can get to the bottom of this? Thanks in advance.
stored procedure languages as well?
)
        ->  Sort  (cost=8408637.34..8534662.95 rows=50410244 width=17)
              Sort Key: b.transactionid
              ->  Seq Scan on metarelcloud_transactionlog b  (cost=0.00..925543.44 rows=50410244 width=17)
(14 rows)
with this query.
Isn't that exactly what the leaf sorts are doing? By comparison,
select * from metarelcloud_transactionlog order by transactionid
takes much, much longer in PG (it's been running for 68 minutes now,
and still going, whereas MySQL took 6 minutes).
, but Postgresql is still running after 70
minutes. Is there something like a glaring misconfiguration that I'm
overlooking? Thanks in advance.
have imagined an external merge-sort as being very
memory-intensive -- wouldn't it only need enough buffer space to read 2x and
write 1x in big-enough chunks for mostly-sequential access?
.
CREATE INDEX i_transactionid ON public.metarelcloud_transactionlog
(transactionid);
Does an index help a sort operation in PostgreSQL?
I also share the same doubt. An external merge-sort needs to make
complete passes over the entire dataset, with no index-directed
accesses.
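For intuition, here is a toy external merge sort: it only ever holds one chunk (plus the streaming merge frontier) in memory, and every pass over the data is sequential, with no index involved (chunk size and pickle-based spilling are just for illustration):

```python
import heapq
import pickle
import tempfile

def _spill(sorted_items):
    """Write one sorted run to a temp file and rewind it."""
    f = tempfile.TemporaryFile()
    for x in sorted_items:
        pickle.dump(x, f)
    f.seek(0)
    return f

def _replay(f):
    """Stream a spilled run back, one item at a time."""
    while True:
        try:
            yield pickle.load(f)
        except EOFError:
            return

def external_sort(items, chunk_size=1000):
    """Sort with bounded memory: sort fixed-size chunks, spill each run
    to disk, then k-way merge the sorted runs sequentially."""
    runs, buf = [], []
    for x in items:
        buf.append(x)
        if len(buf) >= chunk_size:
            runs.append(_spill(sorted(buf)))
            buf = []
    if buf:
        runs.append(_spill(sorted(buf)))
    return list(heapq.merge(*(_replay(f) for f in runs)))
```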
On Mon, Feb 22, 2010 at 1:48 PM, Alban Hertroys
dal...@solfertje.student.utwente.nl wrote:
On 22 Feb 2010, at 19:35, Yang Zhang wrote:
I also wouldn't have imagined an external merge-sort as being very
Where's that external merge-sort coming from? Can you show an explain analyze?
I just
On Mon, Feb 22, 2010 at 2:03 PM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
Yang Zhang wrote:
I'm running:
select * from metarelcloud_transactionlog order by transactionid;
It takes MySQL 6 minutes, but Postgresql is still running after 70
minutes. Is there something like
On Mon, Feb 22, 2010 at 2:15 PM, Frank Heikens frankheik...@mac.com wrote:
On 22 Feb 2010, at 20:07, Yang Zhang wrote:
On Mon, Feb 22, 2010 at 1:48 PM, Alban Hertroys
dal...@solfertje.student.utwente.nl wrote:
On 22 Feb 2010, at 19:35, Yang Zhang wrote:
I also
On Mon, Feb 22, 2010 at 2:27 PM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
Yang Zhang wrote:
I have the exact same table of data in both MySQL and Postgresql. In
Postgresql:
I just noticed two things:
[snip lots of stuff]
1.
) ENGINE=MyISAM AUTO_INCREMENT=50410166 DEFAULT
On Mon, Feb 22, 2010 at 2:39 PM, Scott Marlowe scott.marl...@gmail.com wrote:
On Mon, Feb 22, 2010 at 12:30 PM, Yang Zhang yanghates...@gmail.com wrote:
This isn't some microbenchmark. This is part of our actual analytical
application. We're running large-scale graph partitioning algorithms
On Mon, Feb 22, 2010 at 2:52 PM, Scott Marlowe scott.marl...@gmail.com wrote:
On Mon, Feb 22, 2010 at 11:10 AM, Yang Zhang yanghates...@gmail.com wrote:
I have the exact same table of data in both MySQL and Postgresql. In
Postgresql:
Just wondering, are these on the same exact machine?
Yes
On Mon, Feb 22, 2010 at 2:41 PM, Frank Heikens frankheik...@mac.com wrote:
On 22 Feb 2010, at 20:28, Yang Zhang wrote:
If your work-mem is too low there's a good chance that Postgres has to
use your disks for sorting, which will obviously be quite slow.
Relative
On Mon, Feb 22, 2010 at 3:44 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Yang Zhang yanghates...@gmail.com writes:
On Mon, Feb 22, 2010 at 1:13 PM, Pavel Stehule pavel.steh...@gmail.com
wrote:
the speed depends on setting of working_memory. Try to increase a
working_memory
It's already
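For reference, work_mem can be raised just for the session doing the big sort (value illustrative; note each sort node in a plan may use up to this much):

```sql
SET work_mem = '512MB';
```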
On Mon, Feb 22, 2010 at 3:45 PM, Scott Marlowe
scott.marl...@gmail.com wrote:
On Mon, Feb 22, 2010 at 12:53 PM, Yang Zhang yanghates...@gmail.com wrote:
On Mon, Feb 22, 2010 at 2:52 PM, Scott Marlowe scott.marl...@gmail.com
wrote:
On Mon, Feb 22, 2010 at 11:10 AM, Yang Zhang yanghates
On Mon, Feb 22, 2010 at 9:30 PM, Alex Hunsaker bada...@gmail.com wrote:
On Mon, Feb 22, 2010 at 11:10, Yang Zhang yanghates...@gmail.com wrote:
I have the exact same table of data in both MySQL and Postgresql. In
Postgresql:
FWIW on a stock (unchanged postgresql.conf) 8.3.9 I get (best
On Tue, Feb 23, 2010 at 1:48 AM, Scott Marlowe scott.marl...@gmail.com wrote:
On Mon, Feb 22, 2010 at 10:51 PM, Yang Zhang yanghates...@gmail.com wrote:
On Mon, Feb 22, 2010 at 3:45 PM, Scott Marlowe
scott.marl...@gmail.com wrote:
What do things like vmstat 10 say while the query
72 matches