Re: Copying DB data from Heroku Postgresql to local server

2013-01-25 Thread Peter van Hardenberg
Okay! Tom Lane has now fixed this and future Postgres releases will not
exhibit this behaviour. The commit should land shortly. Thanks for the
report!


On Mon, Jan 21, 2013 at 11:04 PM, Peter van Hardenberg wrote:

> Thanks for the extra reply Nickolay. I think I came up with a pretty
> minimal test-case based on your example.
>
>
> On Mon, Jan 21, 2013 at 10:18 PM, Nickolay Kolev wrote:
>
>> I looked at the thread at pgsql-hackers. It seems that Tom Lane
>> overlooked the dump/restore part.
>>
>> A picture is worth a thousand words:
>>
>> http://grozdova.com/psql-missing-relation-after-dump-restore.png
>>
>> I am not sure if this is intended behaviour or not, but it should become
>> clear at the thread there.
>>
>>
>> On Tuesday, January 22, 2013 12:30:44 AM UTC+1, Peter van Hardenberg
>> wrote:
>>
>>> So here's the deal - when you prepare a statement, Postgres compiles it
>>> down internally for performance. This means that it doesn't target the
>>> table by name anymore, but by its internal "OID". This guarantees nice
>>> properties like query stability, but apparently doesn't play well with
>>> schema changes. One could argue that this is a Postgres bug or defect, but
>>> it could definitely be worked around in Rails.
>>>
>>> I've opened a thread on the pgsql-hackers list about this issue, but
>>> it's not terribly likely it will make the 9.3 release nor is it so dire as
>>> to warrant a point release, so the world will likely have to live with this
>>> defect for another 18 months at least.
>>>
>>> -p
>>>
>>>
>>> On Mon, Jan 21, 2013 at 2:20 PM, Keenan Brock wrote:
>>>
 I wonder if your functions will fail as well.

 Will a vacuum or statistics update recompile all the stored procedures and
 functions?
 Google didn't show me any more information on this one.


 I remember in Sybase, changing the statistics on a table too much used
 to wreak havoc, slowing queries down by over 50x. Used to have to "EXEC
 myProcedure WITH RECOMPILE" to tell the query optimizer to use the latest
 statistics. Also used to rebuild the statistics every night. But I think
 this is already done by Postgres.

 FWIW/
 Keenan

  On Monday, January 21, 2013 at 4:59 PM, Nickolay Kolev wrote:

 The crazy idea works! It was indeed because of prepared statements.

 I will look into why prepared statements do not work after a table is
 dropped and recreated. Should be in postgresql_adapter.rb somewhere.

 Pulling the carpet out from under Rails' feet and putting it back exactly as it
 was before (identical table names) should not cause prepared statements to
 fail, unless they checksum the tables by something other than their names
 in some way.

 Thanks a lot for the hint, Peter!

 On Monday, January 21, 2013 10:08:25 PM UTC+1, Peter van Hardenberg
 wrote:

 crazy idea: try disabling prepared statements.

 rationale: you might have prepared statements enabled, which is the
 default, which would mean that instead of targeting the table by name it
 would be compiled into an internal object ID which would change when you
 run the restore since it drops and recreates the tables.


 On Mon, Jan 21, 2013 at 1:06 PM, Peter van Hardenberg 
 wrote:

 pgbackups makes completely standard dumps, but you could use a local
 pg_dump to prove equivalency. Doing a restore sounds like it's dropping and
 recreating all the tables. Perhaps there's some kind of magic that makes
 the migrations work which doesn't get triggered in your dump/restore case.


 On Mon, Jan 21, 2013 at 11:44 AM, Nickolay Kolev wrote:

 I don't think this is it. Even if there are no schema changes, the same
 behaviour can be observed. Actually Rails *will* pick up schema changes
 (e.g. as introduced by migrations) when running in development mode.

 I have only seen this with Postgres and only when loading a dump. If
 true for all dumps or only the ones created by pgbackups I am not sure.

 --
 You received this message because you are subscribed to the Google
 Groups "Heroku" group.

 To unsubscribe from this group, send email to
 heroku+un...@googlegroups.com
 For more options, visit this group at
 http://groups.google.com/group/heroku?hl=en_US?hl=en




Re: Copying DB data from Heroku Postgresql to local server

2013-01-21 Thread Peter van Hardenberg
Thanks for the extra reply Nickolay. I think I came up with a pretty
minimal test-case based on your example.


On Mon, Jan 21, 2013 at 10:18 PM, Nickolay Kolev  wrote:

> I looked at the thread at pgsql-hackers. It seems that Tom Lane overlooked
> the dump/restore part.
>
> A picture is worth a thousand words:
>
> http://grozdova.com/psql-missing-relation-after-dump-restore.png
>
> I am not sure if this is intended behaviour or not, but it should become
> clear at the thread there.
>
>
> On Tuesday, January 22, 2013 12:30:44 AM UTC+1, Peter van Hardenberg wrote:
>
>> So here's the deal - when you prepare a statement, Postgres compiles it
>> down internally for performance. This means that it doesn't target the
>> table by name anymore, but by its internal "OID". This guarantees nice
>> properties like query stability, but apparently doesn't play well with
>> schema changes. One could argue that this is a Postgres bug or defect, but
>> it could definitely be worked around in Rails.
>>
>> I've opened a thread on the pgsql-hackers list about this issue, but it's
>> not terribly likely it will make the 9.3 release nor is it so dire as to
>> warrant a point release, so the world will likely have to live with this
>> defect for another 18 months at least.
>>
>> -p
>>
>>
>> On Mon, Jan 21, 2013 at 2:20 PM, Keenan Brock wrote:
>>
>>> I wonder if your functions will fail as well.
>>>
>>> Will a vacuum or statistics update recompile all the stored procedures and
>>> functions?
>>> Google didn't show me any more information on this one.
>>>
>>>
>>> I remember in Sybase, changing the statistics on a table too much used
>>> to wreak havoc, slowing queries down by over 50x. Used to have to "EXEC
>>> myProcedure WITH RECOMPILE" to tell the query optimizer to use the latest
>>> statistics. Also used to rebuild the statistics every night. But I think
>>> this is already done by Postgres.
>>>
>>> FWIW/
>>> Keenan
>>>
>>>  On Monday, January 21, 2013 at 4:59 PM, Nickolay Kolev wrote:
>>>
>>> The crazy idea works! It was indeed because of prepared statements.
>>>
>>> I will look into why prepared statements do not work after a table is
>>> dropped and recreated. Should be in postgresql_adapter.rb somewhere.
>>>
>>> Pulling the carpet out from under Rails' feet and putting it back exactly as it
>>> was before (identical table names) should not cause prepared statements to
>>> fail, unless they checksum the tables by something other than their names
>>> in some way.
>>>
>>> Thanks a lot for the hint, Peter!
>>>
>>> On Monday, January 21, 2013 10:08:25 PM UTC+1, Peter van Hardenberg
>>> wrote:
>>>
>>> crazy idea: try disabling prepared statements.
>>>
>>> rationale: you might have prepared statements enabled, which is the
>>> default, which would mean that instead of targeting the table by name it
>>> would be compiled into an internal object ID which would change when you
>>> run the restore since it drops and recreates the tables.
>>>
>>>
>>> On Mon, Jan 21, 2013 at 1:06 PM, Peter van Hardenberg 
>>> wrote:
>>>
>>> pgbackups makes completely standard dumps, but you could use a local
>>> pg_dump to prove equivalency. Doing a restore sounds like it's dropping and
>>> recreating all the tables. Perhaps there's some kind of magic that makes
>>> the migrations work which doesn't get triggered in your dump/restore case.
>>>
>>>
>>> On Mon, Jan 21, 2013 at 11:44 AM, Nickolay Kolev wrote:
>>>
>>> I don't think this is it. Even if there are no schema changes, the same
>>> behaviour can be observed. Actually Rails *will* pick up schema changes
>>> (e.g. as introduced by migrations) when running in development mode.
>>>
>>> I have only seen this with Postgres and only when loading a dump. If
>>> true for all dumps or only the ones created by pgbackups I am not sure.
>>>

Re: Copying DB data from Heroku Postgresql to local server

2013-01-21 Thread Nickolay Kolev
I looked at the thread at pgsql-hackers. It seems that Tom Lane overlooked 
the dump/restore part.

A picture is worth a thousand words:

http://grozdova.com/psql-missing-relation-after-dump-restore.png

I am not sure if this is intended behaviour or not, but it should become 
clear at the thread there.

On Tuesday, January 22, 2013 12:30:44 AM UTC+1, Peter van Hardenberg wrote:
>
> So here's the deal - when you prepare a statement, Postgres compiles it 
> down internally for performance. This means that it doesn't target the 
> table by name anymore, but by its internal "OID". This guarantees nice 
> properties like query stability, but apparently doesn't play well with 
> schema changes. One could argue that this is a Postgres bug or defect, but 
> it could definitely be worked around in Rails.
>
> I've opened a thread on the pgsql-hackers list about this issue, but it's 
> not terribly likely it will make the 9.3 release nor is it so dire as to 
> warrant a point release, so the world will likely have to live with this 
> defect for another 18 months at least.
>
> -p
>
>
> On Mon, Jan 21, 2013 at 2:20 PM, Keenan Brock wrote:
>
>> I wonder if your functions will fail as well.
>>
>> Will a vacuum or statistics update recompile all the stored procedures and 
>> functions?
>> Google didn't show me any more information on this one.
>>
>>
>> I remember in Sybase, changing the statistics on a table too much used to 
>> wreak havoc, slowing queries down by over 50x. Used to have to "EXEC 
>> myProcedure WITH RECOMPILE" to tell the query optimizer to use the latest 
>> statistics. Also used to rebuild the statistics every night. But I think 
>> this is already done by Postgres.
>>
>> FWIW/
>> Keenan
>>
>>  On Monday, January 21, 2013 at 4:59 PM, Nickolay Kolev wrote:
>>
>> The crazy idea works! It was indeed because of prepared statements.
>>
>> I will look into why prepared statements do not work after a table is 
>> dropped and recreated. Should be in postgresql_adapter.rb somewhere.
>>
>> Pulling the carpet out from under Rails' feet and putting it back exactly as it 
>> was before (identical table names) should not cause prepared statements to 
>> fail, unless they checksum the tables by something other than their names 
>> in some way.
>>
>> Thanks a lot for the hint, Peter!
>>
>> On Monday, January 21, 2013 10:08:25 PM UTC+1, Peter van Hardenberg wrote:
>>
>> crazy idea: try disabling prepared statements.
>>
>> rationale: you might have prepared statements enabled, which is the 
>> default, which would mean that instead of targeting the table by name it 
>> would be compiled into an internal object ID which would change when you 
>> run the restore since it drops and recreates the tables.
>>
>>
>> On Mon, Jan 21, 2013 at 1:06 PM, Peter van Hardenberg wrote:
>>
>> pgbackups makes completely standard dumps, but you could use a local 
>> pg_dump to prove equivalency. Doing a restore sounds like it's dropping and 
>> recreating all the tables. Perhaps there's some kind of magic that makes 
>> the migrations work which doesn't get triggered in your dump/restore case.
>>  
>>
>> On Mon, Jan 21, 2013 at 11:44 AM, Nickolay Kolev wrote:
>>
>> I don't think this is it. Even if there are no schema changes, the same 
>> behaviour can be observed. Actually Rails *will* pick up schema changes 
>> (e.g. as introduced by migrations) when running in development mode.
>>
>> I have only seen this with Postgres and only when loading a dump. If true 
>> for all dumps or only the ones created by pgbackups I am not sure.
>>
>
>



Re: Copying DB data from Heroku Postgresql to local server

2013-01-21 Thread Peter van Hardenberg
So here's the deal - when you prepare a statement, Postgres compiles it
down internally for performance. This means that it doesn't target the
table by name anymore, but by its internal "OID". This guarantees nice
properties like query stability, but apparently doesn't play well with
schema changes. One could argue that this is a Postgres bug or defect, but
it could definitely be worked around in Rails.
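As a sketch of what this looks like at the SQL level (the table and statement names here are made up, and the exact behaviour depends on the Postgres release):

```sql
-- Session A: a long-lived connection, e.g. the Rails process
CREATE TABLE widgets (id int);
PREPARE get_widgets AS SELECT * FROM widgets;
EXECUTE get_widgets;   -- works; the cached plan references the table's OID

-- Session B: what pg_restore --clean effectively does
DROP TABLE widgets;
CREATE TABLE widgets (id int);

-- Session A again, on the affected releases:
EXECUTE get_widgets;
-- ERROR:  relation "widgets" does not exist
-- The cached plan still points at the dropped table's OID rather
-- than resolving the name "widgets" again.
```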

I've opened a thread on the pgsql-hackers list about this issue, but it's
not terribly likely it will make the 9.3 release nor is it so dire as to
warrant a point release, so the world will likely have to live with this
defect for another 18 months at least.

-p


On Mon, Jan 21, 2013 at 2:20 PM, Keenan Brock  wrote:

> I wonder if your functions will fail as well.
>
> Will a vacuum or statistics update recompile all the stored procedures and
> functions?
> Google didn't show me any more information on this one.
>
>
> I remember in Sybase, changing the statistics on a table too much used to
> wreak havoc, slowing queries down by over 50x. Used to have to "EXEC
> myProcedure WITH RECOMPILE" to tell the query optimizer to use the latest
> statistics. Also used to rebuild the statistics every night. But I think
> this is already done by Postgres.
>
> FWIW/
> Keenan
>
>  On Monday, January 21, 2013 at 4:59 PM, Nickolay Kolev wrote:
>
> The crazy idea works! It was indeed because of prepared statements.
>
> I will look into why prepared statements do not work after a table is
> dropped and recreated. Should be in postgresql_adapter.rb somewhere.
>
> Pulling the carpet out from under Rails' feet and putting it back exactly as it was
> before (identical table names) should not cause prepared statements to
> fail, unless they checksum the tables by something other than their names
> in some way.
>
> Thanks a lot for the hint, Peter!
>
> On Monday, January 21, 2013 10:08:25 PM UTC+1, Peter van Hardenberg wrote:
>
> crazy idea: try disabling prepared statements.
>
> rationale: you might have prepared statements enabled, which is the
> default, which would mean that instead of targeting the table by name it
> would be compiled into an internal object ID which would change when you
> run the restore since it drops and recreates the tables.
>
>
> On Mon, Jan 21, 2013 at 1:06 PM, Peter van Hardenberg wrote:
>
> pgbackups makes completely standard dumps, but you could use a local
> pg_dump to prove equivalency. Doing a restore sounds like it's dropping and
> recreating all the tables. Perhaps there's some kind of magic that makes
> the migrations work which doesn't get triggered in your dump/restore case.
>
>
> On Mon, Jan 21, 2013 at 11:44 AM, Nickolay Kolev  wrote:
>
> I don't think this is it. Even if there are no schema changes, the same
> behaviour can be observed. Actually Rails *will* pick up schema changes
> (e.g. as introduced by migrations) when running in development mode.
>
> I have only seen this with Postgres and only when loading a dump. If true
> for all dumps or only the ones created by pgbackups I am not sure.
>



Re: Copying DB data from Heroku Postgresql to local server

2013-01-21 Thread Keenan Brock
I wonder if your functions will fail as well.

Will a vacuum or statistics update recompile all the stored procedures and functions?
Google didn't show me any more information on this one.


I remember in Sybase, changing the statistics on a table too much used to wreak 
havoc, slowing queries down by over 50x. Used to have to "EXEC myProcedure WITH 
RECOMPILE" to tell the query optimizer to use the latest statistics. Also used 
to rebuild the statistics every night. But I think this is already done by 
Postgres.

FWIW/
Keenan


On Monday, January 21, 2013 at 4:59 PM, Nickolay Kolev wrote:

> The crazy idea works! It was indeed because of prepared statements.
> 
> I will look into why prepared statements do not work after a table is dropped 
> and recreated. Should be in postgresql_adapter.rb somewhere.
> 
> Pulling the carpet out from under Rails' feet and putting it back exactly as it was 
> before (identical table names) should not cause prepared statements to fail, 
> unless they checksum the tables by something other than their names in some 
> way.
> 
> Thanks a lot for the hint, Peter!
> 
> On Monday, January 21, 2013 10:08:25 PM UTC+1, Peter van Hardenberg wrote:
> > crazy idea: try disabling prepared statements.
> > 
> > rationale: you might have prepared statements enabled, which is the 
> > default, which would mean that instead of targeting the table by name it 
> > would be compiled into an internal object ID which would change when you 
> > run the restore since it drops and recreates the tables. 
> > 
> > 
> > On Mon, Jan 21, 2013 at 1:06 PM, Peter van Hardenberg wrote:
> > > pgbackups makes completely standard dumps, but you could use a local 
> > > pg_dump to prove equivalency. Doing a restore sounds like it's dropping 
> > > and recreating all the tables. Perhaps there's some kind of magic that 
> > > makes the migrations work which doesn't get triggered in your 
> > > dump/restore case. 
> > > 
> > > 
> > > On Mon, Jan 21, 2013 at 11:44 AM, Nickolay Kolev wrote:
> > > > I don't think this is it. Even if there are no schema changes, the same 
> > > > behaviour can be observed. Actually Rails *will* pick up schema changes 
> > > > (e.g. as introduced by migrations) when running in development mode.
> > > > 
> > > > I have only seen this with Postgres and only when loading a dump. If 
> > > > true for all dumps or only the ones created by pgbackups I am not sure.



Re: Copying DB data from Heroku Postgresql to local server

2013-01-21 Thread Nickolay Kolev
The crazy idea works! It was indeed because of prepared statements.

I will look into why prepared statements do not work after a table is 
dropped and recreated. Should be in postgresql_adapter.rb somewhere.

Pulling the carpet out from under Rails' feet and putting it back exactly as it was 
before (identical table names) should not cause prepared statements to 
fail, unless they checksum the tables by something other than their names 
in some way.

Thanks a lot for the hint, Peter!

On Monday, January 21, 2013 10:08:25 PM UTC+1, Peter van Hardenberg wrote:
>
> crazy idea: try disabling prepared statements.
>
> rationale: you might have prepared statements enabled, which is the 
> default, which would mean that instead of targeting the table by name it 
> would be compiled into an internal object ID which would change when you 
> run the restore since it drops and recreates the tables.
>
>
> On Mon, Jan 21, 2013 at 1:06 PM, Peter van Hardenberg wrote:
>
>> pgbackups makes completely standard dumps, but you could use a local 
>> pg_dump to prove equivalency. Doing a restore sounds like it's dropping and 
>> recreating all the tables. Perhaps there's some kind of magic that makes 
>> the migrations work which doesn't get triggered in your dump/restore case.
>>  
>>
>> On Mon, Jan 21, 2013 at 11:44 AM, Nickolay Kolev wrote:
>>
>>> I don't think this is it. Even if there are no schema changes, the same 
>>> behaviour can be observed. Actually Rails *will* pick up schema changes 
>>> (e.g. as introduced by migrations) when running in development mode.
>>>
>>> I have only seen this with Postgres and only when loading a dump. If 
>>> true for all dumps or only the ones created by pgbackups I am not sure.
>>>



Re: Copying DB data from Heroku Postgresql to local server

2013-01-21 Thread Peter van Hardenberg
crazy idea: try disabling prepared statements.

rationale: you might have prepared statements enabled, which is the
default, which would mean that instead of targeting the table by name it
would be compiled into an internal object ID which would change when you
run the restore since it drops and recreates the tables.
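In a Rails app this can be done per-connection in config/database.yml; a sketch, where only the prepared_statements line is the actual workaround and the other values are placeholders:

```yaml
development:
  adapter: postgresql
  database: local_db_name
  username: my_username
  host: localhost
  # Avoid cached plans that break when a restore drops and
  # recreates the tables under the running process.
  prepared_statements: false
```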


On Mon, Jan 21, 2013 at 1:06 PM, Peter van Hardenberg wrote:

> pgbackups makes completely standard dumps, but you could use a local
> pg_dump to prove equivalency. Doing a restore sounds like it's dropping and
> recreating all the tables. Perhaps there's some kind of magic that makes
> the migrations work which doesn't get triggered in your dump/restore case.
>
>
> On Mon, Jan 21, 2013 at 11:44 AM, Nickolay Kolev wrote:
>
>> I don't think this is it. Even if there are no schema changes, the same
>> behaviour can be observed. Actually Rails *will* pick up schema changes
>> (e.g. as introduced by migrations) when running in development mode.
>>
>> I have only seen this with Postgres and only when loading a dump. If true
>> for all dumps or only the ones created by pgbackups I am not sure.
>>
>
>



Re: Copying DB data from Heroku Postgresql to local server

2013-01-21 Thread Peter van Hardenberg
pgbackups makes completely standard dumps, but you could use a local
pg_dump to prove equivalency. Doing a restore sounds like it's dropping and
recreating all the tables. Perhaps there's some kind of magic that makes
the migrations work which doesn't get triggered in your dump/restore case.


On Mon, Jan 21, 2013 at 11:44 AM, Nickolay Kolev  wrote:

> I don't think this is it. Even if there are no schema changes, the same
> behaviour can be observed. Actually Rails *will* pick up schema changes
> (e.g. as introduced by migrations) when running in development mode.
>
> I have only seen this with Postgres and only when loading a dump. If true
> for all dumps or only the ones created by pgbackups I am not sure.
>



Re: Copying DB data from Heroku Postgresql to local server

2013-01-21 Thread Nickolay Kolev
I don't think this is it. Even if there are no schema changes, the same 
behaviour can be observed. Actually Rails *will* pick up schema changes 
(e.g. as introduced by migrations) when running in development mode.

I have only seen this with Postgres and only when loading a dump. If true 
for all dumps or only the ones created by pgbackups I am not sure.



Re: Copying DB data from Heroku Postgresql to local server

2013-01-21 Thread Peter van Hardenberg
Hm - that's an interesting one. I've never seen Rails pick up a schema
change without a restart before. I suspect the complexity cost and overhead
of monitoring the schema would outweigh the benefits of saving a process
restart. That said, it's weird that even if you recreate tables with the
same name they don't get picked up. You might try opening a bug report with
a small reproducible test case.

Let us know if you figure it out. Definitely a strange one.


On Mon, Jan 21, 2013 at 3:12 AM, Nickolay Kolev  wrote:

> Using `heroku db:pull` works but takes a rather long time. I decided to
> try using pg_restore like this:
>
> heroku pgbackups:capture --expire
> curl -o latest.dump `heroku pgbackups:url`
> pg_restore --clean --no-acl --no-owner -h localhost -U my_username -d
> local_db_name latest.dump
>
> This works as expected, i.e. the correct data is retrieved and it is
> faster than using taps.
>
> However, if I have a local Rails process which is connected to the local
> DB running when I import the production data, it starts reporting PG errors
> like
>
> PG::Error: ERROR: relation "xxx" does not exist
>
> After a restart of the Rails process, the errors disappear. Why is the
> Rails connection seeing that the relations are being dropped by importing
> the dump, but not seeing that they are being recreated and populated?
>
> Any ideas on what might be going on and how to fix it?
>
> Many thanks in advance!
>



Copying DB data from Heroku Postgresql to local server

2013-01-21 Thread Nickolay Kolev
Using `heroku db:pull` works but takes a rather long time. I decided to try 
using pg_restore like this:

heroku pgbackups:capture --expire
curl -o latest.dump `heroku pgbackups:url`
pg_restore --clean --no-acl --no-owner -h localhost -U my_username -d 
local_db_name latest.dump

This works as expected, i.e. the correct data is retrieved and it is faster 
than using taps.

However, if I have a local Rails process which is connected to the local DB 
running when I import the production data, it starts reporting PG errors 
like

PG::Error: ERROR: relation "xxx" does not exist

After a restart of the Rails process, the errors disappear. Why is the 
Rails connection seeing that the relations are being dropped by importing 
the dump, but not seeing that they are being recreated and populated?
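(For anyone debugging similar errors: the statements a session has prepared can be inspected from that same connection via the pg_prepared_statements view; the name column shows the adapter-generated identifiers.)

```sql
-- Run on the connection that is reporting the errors
SELECT name, statement, prepare_time
FROM pg_prepared_statements;
```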

Any ideas on what might be going on and how to fix it?

Many thanks in advance!
