On Wed, Aug 26, 2015 at 1:26 PM, Tory M Blue wrote:
>
> Right now the 100% CPU process (which is this index build) is only using
> 3.5GB, and has been for the last 15 hours
>
If it is at 100% CPU, you can run 'sudo perf top' to see what the CPU is busy with.
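To make that concrete, a sketch of the workflow (assuming a server new enough that pg_stat_activity has pid/state/query columns; older releases call them procpid and current_query):

  -- find the PID of the busy backend, then from a shell profile it with:
  --   sudo perf top -p <pid>
  SELECT pid, state, query
  FROM pg_stat_activity
  WHERE state = 'active'
  ORDER BY query_start;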
Regards,
Qingqing
On Tue, Aug 4, 2015 at 8:40 PM, Ram N wrote:
>
> Thanks much for responding, guys. I have tried both building multi-column
> indexes and GiST, with no improvement. I have reduced the window from 180
> days to 30 days, and below are the numbers
>
> Composite index - takes 30 secs
>
> With Btree ind
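For reference, the two approaches being compared might look like this (a sketch only; table and column names are illustrative, not the poster's schema, and the range/GiST form needs 9.2+):

  -- composite B-tree on the window bounds:
  CREATE INDEX table2_dates_btree ON table2 (start_date, end_date);

  -- GiST over a range, which can serve overlap (&&) queries directly,
  -- e.g. WHERE daterange(start_date, end_date, '[]') && daterange(:f, :t, '[]'):
  CREATE INDEX table2_dates_gist ON table2
      USING gist (daterange(start_date, end_date, '[]'));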
On Fri, Jul 31, 2015 at 10:55 AM, Ram N wrote:
>
> Thanks Qingqing for responding. That didn't help; it in fact increased the
> scan time. It looks like a lot of time is being spent on the Nested Loop
> join rather than index lookups, though I am not sure how to optimize the join.
>
Good news is that optimiz
On Thu, Jul 30, 2015 at 12:51 AM, Ram N wrote:
> " -> Index Scan using end_date_idx on public.table2 b
> (cost=0.43..23181.37 rows=345833 width=52) (actual time=0.063..622.274
> rows=403936 loops=181)"
> "Output: b.serial_no, b.name, b.st, b.end_date, b.a,
> b.sta
We have to confirm the theory first: a 'perf top' sampling during the two
runs should give enough information.
Regards,
Qingqing
On Tue, Mar 31, 2015 at 8:58 AM, Kevin Viraud
wrote:
> Touché! Thanks a lot.
>
> Looking more at the data, yes, it goes to the ELSE clause very often, and
> therefore reaching
""John Top-k apad"" <[EMAIL PROTECTED]> wrote
>
> From pg_stat_get_blocks_fetched I can see that both queries need almost the
> same number of disk fetches, which is quite reasonable (the index is
> unclustered).
>
> But as you can see there is a great variation between query
> runtimes. Can someone
"Arjen van der Meijden" <[EMAIL PROTECTED]> wrote
>
> Long story short, we think the test is a nice example of the relatively
> lightweight, read-heavy web applications out there, and therefore decided
> to have a go at PostgreSQL as well.
>
This sort of web query behavior is quite optimized in MySQL
""Craig A. James"" <[EMAIL PROTECTED]> wrote
> I'm having a rare but deadly problem. On our web servers, a process
> occasionally gets stuck, and can't be unstuck. Once it's stuck, all
> Postgres activities cease. "kill -9" is required to kill it --
> signals 2 and 15 don't work, and "/etc/
<[EMAIL PROTECTED]> wrote
> I have a slow sql:
> SELECT * FROM mytable WHERE id IN (1,3,5,7,3k here...);
> mytable is about 10k rows.
>
> If I don't use the "IN" clause, it will cost 0.11 seconds; otherwise it
> will cost 2.x seconds.
> I guess PG uses a linear search to deal with the IN clause, is there
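One workaround worth trying on servers new enough to support VALUES lists (8.2+) is to turn the long IN list into a join, which the planner can hash; a sketch, with the literal id list abbreviated:

  SELECT m.*
  FROM mytable m
  JOIN (VALUES (1), (3), (5), (7)) AS v(id)  -- ... the full 3k ids here
    ON m.id = v.id;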
"Bruno Baguette" <[EMAIL PROTECTED]> wrote
>
>
> Is there a way to log all SQL queries, with the date/time when they were
> launched, and the cost of that query (if this is possible) in order to see
> which queries need to be optimized ?
>
See if the log_statement and log_statement_stats parameters ca
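A sketch of what those look like in practice (the settings can also go in postgresql.conf for the whole server; the timestamp on each line comes from the log line prefix):

  -- as a superuser, for the current session:
  SET log_statement = 'all';        -- log the text of every statement
  SET log_statement_stats = on;     -- log per-statement resource usage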
""Rodrigo Madera"" <[EMAIL PROTECTED]> wrote
>
> The database is holding large amounts of digital video, and I am
> wanting to put these directly into the database. What performance
> guidelines would you all give, seeing my position today?
>
IMHO, if you don't need transaction semantics, don't p
""Gábriel Ákos"" <[EMAIL PROTECTED]> wrote
>
> I've got this message while heavily inserting into a database. What should
> I tune and how? It is postgresql 8.1.3.
>
> 2006-03-29 14:16:57.513 CEST:LOG: statistics buffer is full
>
Since your server is under heavy load, the common trick is to i
""Jim C. Nasby"" <[EMAIL PROTECTED]> wrote
>
> It's not dedicated to discussing MVCC alone, but
>
http://www.pervasive-postgres.com/lp/newsletters/2005/Insights_opensource_Dec.asp#2
> might provide you with some useful info.
> --
Another introduction is here:
http://www.postgresql.org/files/dev
"Simon Riggs" <[EMAIL PROTECTED]> wrote
> On Tue, 2006-03-21 at 06:22 -0600, Jim C. Nasby wrote:
> > Currently, it appears that SELECT * INTO new_table FROM old_table logs
> > each page as it's written to WAL. Is this actually needed? Couldn't the
> > database simply log that the SELECT ... INTO s
""Ksenia Marasanova"" <[EMAIL PROTECTED]> wrote
>
> The application uses a persistent database connection, and when I check
> the status of the connection, it shows: "idle in transaction". I am
> pretty sure that every insert is being committed with an explicit
> "commit()". It always worked before...
"Marcos" <[EMAIL PROTECTED]> wrote
>
> I always thought that the use of * in SELECT affected performance,
> making the search slower.
>
> But I read in a Postgres book that it increases the speed of the
> search.
>
> So which one is faster?
>
If you mean use "*" vs. "explicitly nam
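Illustratively (table and column names hypothetical), naming only the columns you need is never slower than *, and it avoids shipping columns the client never uses:

  SELECT id, name FROM customers;   -- vs. SELECT * FROM customers;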
"Mark Liberman" <[EMAIL PROTECTED]> wrote
>
> Now, my follow-up question / assumption. I am assuming that the IO time is
> so long on that index because it has to read the entire index (for that
> file_id) into memory.
>
> Any confirmation / corrections to my assumptions are greatly appreciated.
"Mark Liberman" <[EMAIL PROTECTED]> wrote
>
> First run, after a night of inactivity:
>
> -> Bitmap Index Scan on 1min_events_file_id_begin_idx
> (cost=0.00..37.85 rows=3670 width=0) (actual time=313.468..313.468
> rows=11082
> loops=1)
> Index Cond:
"Greg Stark" <[EMAIL PROTECTED]> wrote
>
> If the whole database is in RAM I wouldn't expect clustering to have any
> effect. Either you're doing a lot of merge joins or a few other cases where
> clustering might be helping you, or the cluster is helping you keep more of
> the database in RAM
On Thu, 22 Dec 2005, Harry Jackson wrote:
> I am currently using a dual Opteron (248) single core system (RAM
> PC3200) and for a change I am finding that the bottleneck is not disk
> I/O but CPU/RAM (not sure which). The reason for this is that the most
> frequently accessed tables/indexes are a
"Josep Maria Pinyol Fontseca" <[EMAIL PROTECTED]> wrote
>
> When we select all data on the local machine, we obtain results in 2-3
> seconds approx. In remote connections:
>
> PostgreSQL 7.1 using pgAdmin II:
> Network traffic generated with remote applications is about 77-80% on a
> 10Mb connection.
"David Lang" <[EMAIL PROTECTED]> wrote
> Here are the suggestions from the MySQL folks; what additional tests
> should I do?
>
I think the tests you listed are enough at this stage,
Regards,
Qingqing
On Fri, 2 Dec 2005, David Lang wrote:
>
> I don't have all the numbers readily available (and I didn't do all the
> tests on every filesystem), but I found that even with only 1000
> files/directory ext3 had some problems, and if you enabled dir_hash some
> functions would speed up, but writing l
"David Lang" <[EMAIL PROTECTED]> wrote
>
> A few weeks ago I did a series of tests to compare different filesystems.
> The test was for a different purpose (so the particulars are not what I
> would do for testing aimed at Postgres, but I think the data is relevant),
> and I saw major differences
"Jeff Frost" <[EMAIL PROTECTED]> wrote
>
> Did you folks see this article on Slashdot with a fellow requesting input
> on what sort of benchmarks to run to get a good Postgresql vs Mysql
> dataset? Perhaps this would be a good opportunity for us to get some good
> benchmarking done.
> "The har
"Guillaume Smet" <[EMAIL PROTECTED]> wrote
> [EMAIL PROTECTED] root]# iostat 10
>
> Device:     tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
> sda        7.20         0.00        92.00          0        920
> sda1       0.00         0.00         0.00          0
""Magnus Hagander"" <[EMAIL PROTECTED]> wrote
>>
>> I'd like to use the win32-provided recv(), send() functions
>> instead of redirecting them to pgwin32_recv()/pgwin32_send(),
>> just like libpq does. If we do this, we will lose some
>> functionality, but I'd like to see the performance
>> differe
""Merlin Moncure"" <[EMAIL PROTECTED]> wrote
> ok, here is gprof output from newlines/no newlines
> [newlines]
>   %   cumulative   self              self     total
>  time   seconds    seconds   calls   s/call   s/call  name
> 19.03      0.67       0.67       1     0.67     3.20  MainLoop
> 17.61
On Thu, 3 Nov 2005, Tom Lane wrote:
>
> On Unix I get a dead flat line (within measurement noise), both local
> loopback and across my LAN.
>
> after 5 30.20 sec
> after 10 31.67 sec
> after 15 30.98 sec
> after 20 29.64 sec
> after 25 29.83 sec
>
Confirmed on Linux. And on
On Thu, 3 Nov 2005, Magnus Hagander wrote:
> > Both win32 send/recv have pgwin32_poll_signals() in them.
> > This is glorified WaitForSingleObjectEx on global
> > pgwin32_signal_event. This is probably part of the problem.
> > Can we work some of the same magic you put into check
> > interrupts
"Qingqing Zhou" <[EMAIL PROTECTED]> wrote
>
> Not to 100%, so this means the server is always starving. It is waiting on
> something -- of course not a lock. That's why I think there is some problem
> with network communication. Another suspect would be the write - I
""Merlin Moncure"" <[EMAIL PROTECTED]> wrote
>
> Running from remote, Time progression is:
> First 50k: 20 sec
> Second: 29 sec
> [...]
> final:: 66 sec
>
This may be due to the maintenance cost of a big transaction, I am not sure
... Tom?
> so, clear upward progression of time/rec. Init
On Thu, 3 Nov 2005, Magnus Hagander wrote:
> > > Sorry, I don't follow you here - what do you mean to do? Remove the
> > > event completely so we can't wait on it?
> > >
> >
> > I'd like to use the win32-provided recv(), send() functions
> > instead of redirecting them to pgwin32_recv()/pgwin32_sen
On Thu, 3 Nov 2005, Magnus Hagander wrote:
>
> Sorry, I don't follow you here - what do you mean to do? Remove the
> event completely so we can't wait on it?
>
I'd like to use the win32-provided recv(), send() functions instead of
redirecting them to pgwin32_recv()/pgwin32_send(), just like libpq
On Wed, 2 Nov 2005, Merlin Moncure wrote:
> >
> > By the way, we've tried to insert into the windows database from a
> > linux psql client, via the network. In this configuration, inserting
> > is only about 2 times slower than inserting locally (the linux client
> > had a slower CPU 1700MHz aga
"Tom Lane" <[EMAIL PROTECTED]> wrote
>
> No, that's completely irrelevant to his problem. The reason we can't do
> this is that the transformation from "x << const" to a range check on x
> is a plan-time transformation; there's no mechanism in place to do it
> at runtime. This is not easy to fix
"Robert Edmonds" <[EMAIL PROTECTED]> wrote
>
> EXPLAIN ANALYZE
> SELECT *
> FROM inet_addresses
> WHERE addr << inet('10.2.0.0/24')
>OR addr << inet('10.4.0.0/24')
>OR addr << inet('10.8.0.0/24');
>
> Bitmap Heap Scan on inet_addresses (cost=6.51..324.48 rows=1792335
> width=11) (actual
""Ahmad Fajar"" <[EMAIL PROTECTED]> wrote
> Hi Qingqing,
>
> I don't know whether the statistics I got are bad or good; these are the
> statistics:
Please do it this way:
1. Start the postmaster with "stats_start_collector=true" and
"stats_block_level=true".
2. Use psql to connect to it, and do something like t
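Presumably step 2 continues along these lines (a sketch; 'dict' stands in for the table under investigation):

  -- compare total block fetches with those satisfied from shared buffers:
  SELECT pg_stat_get_blocks_fetched('dict'::regclass) AS fetched,
         pg_stat_get_blocks_hit('dict'::regclass) AS hit;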
""Ahmad Fajar"" <[EMAIL PROTECTED]> wrote
>
> Select ids, keywords from dict where keywords='blabla' ('blabla' is a
> single
> word);
>
> The table has 200 million rows, and I have indexed the keywords field. The
> first time, my query seems slow to get the result, about 15-60 sec to get
> the res
<[EMAIL PROTECTED]> wrote
>
> One machine is simply not going to be able to scale with the quantities
> of links we hope to store information about and we want to move to some
> kind of cluster. Because of the quantities of data, it seems to make
> sense to go for a cluster setup such that in a 4
"Brandon Black" <[EMAIL PROTECTED]> wrote ...
> Increasing shared_buffers seems to always help, even out to half of
> the dev box's RAM (2G).
Though officially PG does not recommend a huge shared_buffers size, I did
see several times that performance was boosted in cases where IO is the
"Alvaro Herrera" <[EMAIL PROTECTED]> writes
>
> Interesting; do they use an overwriting storage manager like Oracle, or
> a non-overwriting one like Postgres?
>
They call this MVCC "RLV (row-level versioning)". I think they use a rollback
segment like Oracle (a.k.a. the "version store", or tempdb in SQL S
""Chun Yit(Chronos)"" <[EMAIL PROTECTED]> writes
>
> Hi, I have a situation here: I created a PL/pgSQL function that uses a
> temp table to store temporary data.
> Whenever I execute my function, I need to delete all the data inside the
> temp table, but this will slow down the
> searching function if I
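One standard thing to try (a sketch; whether it applies depends on what the function does) is TRUNCATE instead of DELETE for full clears, since TRUNCATE reclaims the space immediately rather than leaving dead rows behind:

  CREATE TEMP TABLE scratch (k int, v text);
  -- ... populate and use the table ...
  TRUNCATE scratch;   -- cheaper than DELETE FROM scratch; for a full clear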
<[EMAIL PROTECTED]> writes
>
>
> So, if I do a query like "EXPLAIN ANALYZE select * from pridecdr where
> idsede=8977758488", it takes a lot of time before I get back any result:
>
> Index Scan using prd_id_sede on pridecdr (cost=0.00..699079.90
> rows=181850 width=138) (actual time=51.241..483068.2
"Dennis" <[EMAIL PROTECTED]> writes
>
> checking the status of connections at this point ( ps -eaf | grep
> "postgres:") where the CPU is maxed out I saw this:
>
> 127 idle
> 12 bind
> 38 parse
> 34 select
>
Are you sure the 100% CPU usage is solely contributed by PostgreSQL? Also,
from the ps statu
"Tobias Brox" <[EMAIL PROTECTED]> writes
> Is there any way to force the planner to use some specific index
> while creating the plan? Other than eventually dropping all the other
> indices (which is obviously not a solution in a production setting anyway)?
>
I don't think currently PG suppo
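There is no index-hint syntax; the usual session-local nudge (a sketch only, with a made-up query, and not a production recommendation) is to make the competing plan look expensive:

  BEGIN;
  SET LOCAL enable_seqscan = off;               -- discourage sequential scans
  EXPLAIN SELECT * FROM mytable WHERE id = 42;  -- did it pick the index?
  ROLLBACK;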
""Jim C. Nasby"" <[EMAIL PROTECTED]> writes
> Has anyone looked at compressing WALs before writing to disk? On a
> system generating a lot of WAL, it seems there might be some gains to be
> had if WAL data could be compressed before going to disk, since today's
> machines are generally more I/O bound
So is it to make the SQL parser context-sensitive - say, the parser would
understand that in the statement "SELECT * FROM LIMIT", LIMIT is just a table
name instead of a keyword?
There might be some conflicts when using Yacc, but I am not sure how
difficult it would be ...
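For what it's worth, the grammar already accepts reserved words once they are quoted as identifiers, so the ambiguity only exists for the unquoted form:

  CREATE TABLE "limit" (x int);
  SELECT * FROM "limit";   -- works today; unquoted LIMIT would not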
Cheers,
Qingqing
"Christopher Kings-L
"Bruce Momjian" writes
>
> Yes, we now enable FILE_FLAG_WRITE_THROUGH on Win32 for open_sync and I
> am about to open a discussion whether this should be the default for
> Win32, and whether we should backpatch this to 8.0.X.
Just a short message: Oracle/SQL Server enable it by default on win32 *no m
""Tambet Matiisen"" <[EMAIL PROTECTED]> writes
> Hi!
>
> In one of our applications we have a database function which
> recalculates COGS (cost of goods sold) for a certain period. This involves
> deleting a bunch of rows from one table, inserting them again in the correct
> order, and updating them one-by-