Re: No commit nor Rollback button
On Tue, Oct 17, 2017 at 1:07 AM, Dave Page wrote:
> I can see why for some people who choose to turn auto-commit/auto-rollback
> off they may be useful, however we cannot simply add new features every
> time someone asks for something. Doing so adds maintenance costs, and
> increases complexity of the UI for *everyone*. That is part of the reason
> why pgAdmin III became unmaintainable; we added too many features on a whim
> without giving enough thought to whether or not the added code and UI
> complexity was justified, and eventually ended up with a mess of
> spaghetti-code.

So consider the lack of requests to be not so lacking anymore... One concrete advantage to the buttons (and mind you, I haven't actually used pgAdmin 4, but I do use a GUI) is that in my GUI, if you send the COMMIT command to the server as text, any and all result set tables present on the current screen are removed and a new command result for the commit response replaces them. If one uses the button, the result tables are left alone.

Frankly, auto-commit mode can be dangerous, so if you are advocating that people simply use that and forget about manually committing altogether, I think you are misguided in your thinking. In the UI that I use, if I send a "begin" to the server then, and only then, do the commit/rollback buttons appear (and auto-commit is disabled temporarily). With that flow your "end-user UI complexity" argument becomes significantly more specious and you are just left with "code complexity".

David J.
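The begin/commit flow described above can be sketched against the DB-API interface that psycopg2 (the driver behind pgAdmin 4) implements. The function names here are hypothetical UI hooks, not anything that exists in pgAdmin; only the `autocommit`/`commit()`/`rollback()` driver calls are real:

```python
# Hypothetical wiring for the button flow described above: auto-commit
# stays on until the user sends BEGIN, after which commit/rollback are
# explicit driver calls rather than SQL text typed into the query window
# (so the result grids already on screen are not replaced by a new
# command result).

def begin_manual(conn):
    """User sent 'begin': disable auto-commit so the driver opens a
    transaction on the next statement; this is where the commit and
    rollback buttons would become enabled."""
    conn.autocommit = False

def commit_button(conn):
    """Green button: commit via the driver, not via a 'COMMIT' query,
    so no result table is disturbed; then restore auto-commit."""
    conn.commit()
    conn.autocommit = True

def rollback_button(conn):
    """Red button: discard the transaction, restore auto-commit."""
    conn.rollback()
    conn.autocommit = True

# Usage with psycopg2 (assumed DSN):
#   import psycopg2
#   conn = psycopg2.connect("dbname=test")
#   conn.autocommit = True     # default behaviour
#   begin_manual(conn)         # after the user sends BEGIN
#   ...run statements...
#   commit_button(conn)
```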
Re: Data grid: fetching/scrolling data on user demand
On Tue, Oct 17, 2017 at 1:44 PM, Tomek wrote:
> Hi,
>
> >>>> It is not exactly truth... In v3 the query is executed, fetched and all rows are displayed,
> >>>
> >>> No they're not, though they are all transferred to the client which is why it's slower.
> >>
> >> They are not what?
> >
> > The handling of rows in pgAdmin 3 is not as you described.
>
> So please tell me how is it done.

You can find the code here: https://git.postgresql.org/gitweb/?p=pgadmin3.git;a=tree;h=216f52a5135db3124268a32c553b812547d4b918;hb=216f52a5135db3124268a32c553b812547d4b918

I don't have the spare cycles to start explaining how pgAdmin 3 worked internally.

> >> What is slower - is the "display" part in both versions. You have data from server and then You push it to display.
> >> I've done quick test - table 65 rows / 45 columns, query SELECT * from table limit 10.
> >> With default ON_DEMAND_RECORD_COUNT around 5 seconds, with ON_DEMAND_RECORD_COUNT = 10 25 seconds...
> >> It is 20 seconds spent only on displaying...
> >
> > So? No human can read that quickly.
>
> I get the joke but it doesn't add anything to this discussion...

It wasn't a joke.

> >>>> For me this idea of "load on demand" (which in reality is "display on demand") is pointless. It is done only because the main lag of v4 comes from interface. I don't see any other purpose for it... If You know (and You do) that v4 can't handle big results add pagination like every other webapp...
> >>>
> >>> We did that in the first beta, and users overwhelmingly said they didn't like or want pagination.
> >>>
> >>> What we have now gives users the interface they want, and presents the data to them quickly - far more quickly than pgAdmin 3 ever did when working with larger resultsets.
> >>>
> >>> If that's pointless for you, then that's fine, but other users appreciate the speed and responsiveness.
>
> Part of the data...
> Please explain to me what is the point of requesting 10 and getting 1000 without possibility to access the rest?

Of course you can get all 10, you just scroll down. Saying that you can only get 1000 "without possibility to access the rest" is 100% factually incorrect.

> >> I don't know of any users (we are the users) who are happy that selecting 1 rows requires dragging scrollbar five times to see 5001 record...
> >>
> >> Saying pointless I meant that if I want 1 rows I should get 1 rows, if I want to limit my data I'll use LIMIT. But if the ui can't handle big results just give me easiest/fastest way to get to my data.
> >
> > Then increase ON_DEMAND_RECORD_COUNT to a higher value if that suits the way you work. Very few people scroll as you suggest - if you know you want to see the 5001 record, it's common to use limit/offset. If you don't know, then you're almost certainly going to scroll page by page or similar, reading the results as you go - in which case, the batch loading will speed things up for you as you'll have a much quicker "time to view first page".
>
> You can read, yes? I wrote about LIMIT earlier... I'll put it in simpler terms "If you don't know" what I mean.
>
> You understand that if a user selects data, the user wants to get data? - that is why he is doing the select.
> You understand that to get the data is to see the data? - that is why he is doing the select.
> You understand that data can take more than 1000 records?
> You understand that You hide the data without possibility to access it (at least quickly)?
> You understand that in this example the user didn't get the data?
>
> I hope that the above clarifies the pointlessness of "display on demand" without some other option to speed up the browsing process.
>
> As for increasing ON_DEMAND_RECORD_COUNT - I would gladly do it but then displaying 10 records takes more time and about 500 MB of memory (v3 used around 80)...
> What is more, in v4 I can't get 30 records even with the default ON_DEMAND_RECORD_COUNT (canceled the query after 5 minutes) while v3 managed to do it in 80 seconds.
>
> I understand that large results are rare but this is a management tool, not some office app... It should do what I want...
>
> And again You are telling me what I want and what I need, and how I should do it... You decided to make this tool in a way that no other db management tool works,

No - I'm telling you what nearly 20 years of interacting with pgAdmin users has indicated they likely want. If it's not what you want, then you are free to look at other tools. We do our best to meet the requirements of the majority of our users, but it's obviously impossible to meet everyone's requirements.

> You cut out a lot of v3 features,

Yes. Did you ever create an operator family? Or use the graphical query designer that crashed all the time? Those and many other features were intentionally removed. Some have been put back in
Re: Data grid: fetching/scrolling data on user demand
On Tue, Oct 17, 2017 at 11:35 AM, Tomek wrote:
> Hi,
>
> >> It is not exactly truth... In v3 the query is executed, fetched and all rows are displayed,
> >
> > No they're not, though they are all transferred to the client which is why it's slower.
>
> They are not what?

The handling of rows in pgAdmin 3 is not as you described.

> What is slower - is the "display" part in both versions. You have data from server and then You push it to display.
> I've done quick test - table 65 rows / 45 columns, query SELECT * from table limit 10.
> With default ON_DEMAND_RECORD_COUNT around 5 seconds, with ON_DEMAND_RECORD_COUNT = 10 25 seconds...
> It is 20 seconds spent only on displaying...

So? No human can read that quickly.

> >> For me this idea of "load on demand" (which in reality is "display on demand") is pointless. It is done only because the main lag of v4 comes from interface. I don't see any other purpose for it... If You know (and You do) that v4 can't handle big results add pagination like every other webapp...
> >
> > We did that in the first beta, and users overwhelmingly said they didn't like or want pagination.
> >
> > What we have now gives users the interface they want, and presents the data to them quickly - far more quickly than pgAdmin 3 ever did when working with larger resultsets.
> >
> > If that's pointless for you, then that's fine, but other users appreciate the speed and responsiveness.
>
> I don't know of any users (we are the users) who are happy that selecting 1 rows requires dragging scrollbar five times to see 5001 record...
> Saying pointless I meant that if I want 1 rows I should get 1 rows, if I want to limit my data I'll use LIMIT. But if the ui can't handle big results just give me easiest/fastest way to get to my data.

Then increase ON_DEMAND_RECORD_COUNT to a higher value if that suits the way you work.
Very few people scroll as you suggest - if you know you want to see the 5001 record, it's common to use limit/offset. If you don't know, then you're almost certainly going to scroll page by page or similar, reading the results as you go - in which case, the batch loading will speed things up for you as you'll have a much quicker "time to view first page".

--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
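The batch loading being discussed can be sketched with the standard DB-API `fetchmany()` call, which psycopg2 (the driver behind pgAdmin 4) implements. `ON_DEMAND_RECORD_COUNT` is the setting named in the thread; the DSN, table, and function name are placeholders:

```python
# Sketch of "load on demand" batching via DB-API fetchmany().
# ON_DEMAND_RECORD_COUNT is the pgAdmin 4 setting discussed in the
# thread; everything else here is illustrative.

ON_DEMAND_RECORD_COUNT = 1000

def fetch_in_batches(cur, batch_size=ON_DEMAND_RECORD_COUNT):
    """Yield rows one batch at a time, so the UI can render the first
    batch of grid cells without first building cells for every row in
    the result set."""
    while True:
        batch = cur.fetchmany(batch_size)
        if not batch:
            break
        yield batch

# Usage with psycopg2 (assumed DSN and table):
#   import psycopg2
#   conn = psycopg2.connect("dbname=test")
#   cur = conn.cursor()                       # client-side cursor: libpq still
#   cur.execute("SELECT * FROM big_table")    # transfers the whole result set,
#   first = next(fetch_in_batches(cur))       # but only 1000 rows are displayed
```

Note that with a plain (client-side) cursor the full result set is still transferred to the client, exactly as described earlier in the thread; the batching only controls how much of it is turned into a visible grid at a time.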
Faded text in pgAdmin 4 2.0/Win Server 2008r2
Must be something wrong in my setup: Windows Server 2008 R2 TS, accessed with two Full HD 24" monitors in 32-bit color depth, themes disabled, font smoothing enabled, composition enabled, visual effects disabled.

I just can't read texts... They are "faded". See attached screenshot for reference.

I'll appreciate your help.

Regards,

Edson Richter
Re: Data grid: fetching/scrolling data on user demand
Hi,

>> It is not exactly truth... In v3 the query is executed, fetched and all rows are displayed,
>
> No they're not, though they are all transferred to the client which is why it's slower.

They are not what?

What is slower - is the "display" part in both versions. You have data from server and then You push it to display.
I've done quick test - table 65 rows / 45 columns, query SELECT * from table limit 10.
With default ON_DEMAND_RECORD_COUNT around 5 seconds, with ON_DEMAND_RECORD_COUNT = 10 25 seconds...
It is 20 seconds spent only on displaying...

>> For me this idea of "load on demand" (which in reality is "display on demand") is pointless. It is done only because the main lag of v4 comes from interface. I don't see any other purpose for it... If You know (and You do) that v4 can't handle big results add pagination like every other webapp...
>
> We did that in the first beta, and users overwhelmingly said they didn't like or want pagination.
>
> What we have now gives users the interface they want, and presents the data to them quickly - far more quickly than pgAdmin 3 ever did when working with larger resultsets.
>
> If that's pointless for you, then that's fine, but other users appreciate the speed and responsiveness.

I don't know of any users (we are the users) who are happy that selecting 1 rows requires dragging scrollbar five times to see 5001 record...
Saying pointless I meant that if I want 1 rows I should get 1 rows, if I want to limit my data I'll use LIMIT. But if the ui can't handle big results just give me easiest/fastest way to get to my data.

--
Tomek
Re: Data grid: fetching/scrolling data on user demand
Hi Tomek,

On Tue, Oct 17, 2017 at 3:21 PM, Tomek wrote:
> Hi,
>
> > As I mentioned in my previous email, we do not use a server side cursor, so it won't add any limit on the query.
> >
> > The delay is from the database driver itself; it has nothing to do with pgAdmin4.
> > Try executing the same query in 'psql', 'pgAdmin3' and any third party tool which uses the libpq library as backend; you will observe the same behaviour.
>
> It is not exactly truth... In v3 the query is executed, fetched and all rows are displayed, in v4 query is executed, fetched but only 1000 records are displayed.

My answer was not in regard to fetching & displaying the data in the UI, but about the execution time of a query. My point was that unless we get the result from the database driver itself, we cannot fetch it.

> For me this idea of "load on demand" (which in reality is "display on demand") is pointless. It is done only because the main lag of v4 comes from interface. I don't see any other purpose for it... If You know (and You do) that v4 can't handle big results add pagination like every other webapp...
>
> And by the way You have a big leak in query tool - execute query with several thousands rows, scroll above 1000 mark few times, execute the same query, scroll above 1000 mark few times - repeat until You run out of memory or v4 crashes...

Thanks for reporting, I'll look into this.

> --
> Tomek
Re: Data grid: fetching/scrolling data on user demand
On Tue, Oct 17, 2017 at 10:51 AM, Tomek wrote:
> Hi,
>
> > As I mentioned in my previous email, we do not use a server side cursor, so it won't add any limit on the query.
> >
> > The delay is from the database driver itself; it has nothing to do with pgAdmin4.
> > Try executing the same query in 'psql', 'pgAdmin3' and any third party tool which uses the libpq library as backend; you will observe the same behaviour.
>
> It is not exactly truth... In v3 the query is executed, fetched and all rows are displayed,

No they're not, though they are all transferred to the client which is why it's slower.

> For me this idea of "load on demand" (which in reality is "display on demand") is pointless. It is done only because the main lag of v4 comes from interface. I don't see any other purpose for it... If You know (and You do) that v4 can't handle big results add pagination like every other webapp...

We did that in the first beta, and users overwhelmingly said they didn't like or want pagination.

What we have now gives users the interface they want, and presents the data to them quickly - far more quickly than pgAdmin 3 ever did when working with larger resultsets.

If that's pointless for you, then that's fine, but other users appreciate the speed and responsiveness.

--
Dave Page
Re: Data grid: fetching/scrolling data on user demand
Hi,

> As I mentioned in my previous email, we do not use a server side cursor, so it won't add any limit on the query.
>
> The delay is from the database driver itself; it has nothing to do with pgAdmin4.
> Try executing the same query in 'psql', 'pgAdmin3' and any third party tool which uses the libpq library as backend; you will observe the same behaviour.

It is not exactly truth... In v3 the query is executed, fetched and all rows are displayed; in v4 the query is executed and fetched, but only 1000 records are displayed.

For me this idea of "load on demand" (which in reality is "display on demand") is pointless. It is done only because the main lag of v4 comes from the interface. I don't see any other purpose for it... If You know (and You do) that v4 can't handle big results, add pagination like every other webapp...

And by the way, You have a big leak in the query tool - execute a query with several thousand rows, scroll above the 1000 mark a few times, execute the same query, scroll above the 1000 mark a few times - repeat until You run out of memory or v4 crashes...

--
Tomek
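For contrast with the "we do not use server side cursor" point quoted above, this is roughly what a server-side (named) cursor looks like in psycopg2, where the database holds the result and only the requested batch crosses the wire. The function name, DSN, and table are placeholders, not pgAdmin code:

```python
# Hypothetical helper showing psycopg2's server-side cursor, the
# alternative the thread mentions pgAdmin 4 does NOT use. Passing a
# name to conn.cursor() makes psycopg2 DECLARE a cursor on the server,
# so fetchmany() transfers only the requested rows instead of the
# whole result set arriving at the client up front.

def open_server_side_cursor(conn, query, batch_size=1000):
    """Open a named (server-side) cursor for query; rows are pulled
    from the server batch_size at a time."""
    cur = conn.cursor(name="ondemand")  # the name is what makes it server-side
    cur.itersize = batch_size           # psycopg2: rows per network round trip
    cur.execute(query)
    return cur

# Usage (assumed DSN and table):
#   import psycopg2
#   conn = psycopg2.connect("dbname=test")
#   cur = open_server_side_cursor(conn, "SELECT * FROM big_table")
#   first_page = cur.fetchmany(1000)    # only 1000 rows cross the wire
```

The trade-off, which may be why a client-side cursor was chosen, is that a server-side cursor ties the result to an open transaction on the server rather than freeing it as soon as the query completes.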
Re: Data grid: fetching/scrolling data on user demand
On Tue, Oct 17, 2017 at 9:36 AM, legrand legrand <legrand_legr...@hotmail.com> wrote:
> Pgadmin doesn't have to wait for all the data,
> as it should only load/fetch the first 1000 rows.
>
> Loading all the data in memory will not be possible for big datasets.
>
> This is a design error from my point of view.

pgAdmin doesn't wait for all the data. It does load/fetch only the first 1000 rows. However, it cannot do anything until Postgres makes data available.

--
Dave Page
Re: Data grid: fetching/scrolling data on user demand
Pgadmin doesn't have to wait for all the data, as it should only load/fetch the first 1000 rows.

Loading all the data in memory will not be possible for big datasets.

This is a design error from my point of view.

PAscal
SQLeo projection manager
Re: Data grid: fetching/scrolling data on user demand
On Tue, Oct 17, 2017 at 6:36 AM, Murtuza Zabuawala <murtuza.zabuaw...@enterprisedb.com> wrote:
>
> On Tue, Oct 17, 2017 at 2:22 AM, legrand legrand <legrand_legr...@hotmail.com> wrote:
>
>> How long does it take in your environment to fetch the 1000 first records from
>>
>> select * from information_schema.columns a, information_schema.columns b
>
> I didn't run it because on my environment just a count from information_schema.columns gave me 7325 records (the cross join will be 7325 * 7325 records, and obviously that will take a huge amount of time).

Right - exactly as it does in psql (the command line interface). pgAdmin can't make the database engine faster - all it can do is retrieve and display the results as efficiently as possible, once the server makes them available.

--
Dave Page
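For scale, the cross join quoted above works out to tens of millions of rows before any LIMIT applies:

```python
# Size of the cross join of information_schema.columns with itself,
# using the row count reported above (7325 rows per side).
rows_per_side = 7325
total_rows = rows_per_side * rows_per_side
print(total_rows)  # 53655625 rows the server must be able to produce
```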
Re: No commit nor Rollback button
On Tue, Oct 17, 2017 at 6:26 AM, Murtuza Zabuawala <murtuza.zabuaw...@enterprisedb.com> wrote:
>
> On Tue, Oct 17, 2017 at 2:03 AM, legrand legrand <legrand_legr...@hotmail.com> wrote:
>
>> Hi Murtuza,
>>
>> I found those options for switching between autocommit mode and manual mode.
>>
>> What I suggest here is to add one button for commit and one button for rollback;
>> they would be green and red in manual transaction mode,
>> they would be greyed in autocommit mode.
>>
>> A more advanced behavior in manual mode is to leave them greyed as long as there is no modification done in the transaction.
>
> Oh okay, that functionality is not available.
>
> In my personal opinion, I don't see the requested commit/rollback buttons as a frequently used feature for most users when auto commit/rollback mode is available.

I agree - and based on the lack of requests over the years, I believe most users would too. I can see why for some people who choose to turn auto-commit/auto-rollback off they may be useful, however we cannot simply add new features every time someone asks for something. Doing so adds maintenance costs, and increases complexity of the UI for *everyone*. That is part of the reason why pgAdmin III became unmaintainable; we added too many features on a whim without giving enough thought to whether or not the added code and UI complexity was justified, and eventually ended up with a mess of spaghetti-code.

--
Dave Page