Re: Killing Cron Process?

2011-01-11 Thread Zach Bailey
I tried Oren's suggestion of doing a heroku restart, and that did not
kill/restart the already-running cron task. Same with uninstalling the cron
addon - that does not seem to kill the already-running cron process either.

Any other ideas? Maybe a heroku kill might be in order...

-Zach

On Wed, Dec 29, 2010 at 1:50 PM, Zach Bailey znbai...@gmail.com wrote:

 I have a long-running cron process as a result of an out of control task
 that I need to kill. Is it possible to do this via the heroku gem or console
 somehow?

 -Zach


-- 
You received this message because you are subscribed to the Google Groups 
Heroku group.
To post to this group, send email to her...@googlegroups.com.
To unsubscribe from this group, send email to 
heroku+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/heroku?hl=en.



Killing Cron Process?

2010-12-29 Thread Zach Bailey
I have a long-running cron process as a result of an out of control task
that I need to kill. Is it possible to do this via the heroku gem or console
somehow?

-Zach




Re: Storing logs in a remote server

2010-12-23 Thread Zach Bailey
 The advanced version of the Heroku logging add-on allows you to specify your 
own custom syslog endpoint:



http://addons.heroku.com/logging


I would recommend contacting someone at Heroku directly to gain access to that 
so you can capture all the data. This way, you could set up your own dedicated 
EC2 instance running syslog and capture all the data, allowing you to read and 
filter it as needed.
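If you go the dedicated-EC2 route, the collector side is just a syslog daemon listening on the network. A minimal sketch for rsyslog (the port and file path are illustrative, not anything Heroku prescribes):

```
# /etc/rsyslog.d/heroku.conf - accept remote syslog over UDP
$ModLoad imudp
$UDPServerRun 514

# dump everything arriving from the Heroku drain into one file
*.* /var/log/heroku-app.log
```

You would then point the logging add-on's custom syslog endpoint at that instance's public hostname and port, and read/filter the file however you like.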


Cheers,
Zach


On Thursday, December 23, 2010 at 11:45 PM, Ming Yeow Ng wrote:

 Hi folks, 
 
 
 I am logging user activity at a far higher granularity than any of the 
 metrics software would allow.
 
 
 As such, I need to store the logs somewhere that does not require my laptop 
 to be on. Something like this:
 
 
 
  $ heroku logs --tail | grep what_i_want_to_log > store_somewhere_remote 
  
  
 
 I can imagine setting this up on an Amazon micro instance, but that seems 
 like overkill to me. 
 
 Anyone have similar experience with this? 
 
 
 M 
 
 
 
 
 





Re: logs:cron shows no output

2010-12-13 Thread Zach Bailey

 Any updates on this? I'm now having an error in one of my cron tasks, and the 
inability to view the logs as a first line of debugging is causing me a major 
headache. It's now been two weeks since this broke.


What gives? 

-Zach


On Friday, December 10, 2010 at 1:14 PM, Zach Bailey wrote:

 
  I'm also seeing this. I have hourly cron installed, which should produce 
 logging (via puts) every time it runs.
 
 
 When I run heroku logs:cron the last timestamp I see is:
 
 
 Tue Nov 30 14:57:20 -0800 2010
 
 
 I know my cron is running because the tasks it takes care of are still being 
 handled correctly.
 
 
 What gives, heroku peeps? I sure would like to be able to monitor my cron to 
 make sure everything is running 100%
 
 -Zach
 
 
 On Thursday, December 9, 2010 at 6:00 AM, themire wrote:
 
  Not sure if it's related but my cron logs haven't updated since
  November 30th though it seems to be running every night. Heroku are
  experimenting with a new beta logging service so it might be related
  to that.
  
  
  
  
  
  
 
 
 
 
 
 





Re: Can't seem to install the Taps gem

2010-12-10 Thread Zach Bailey

 If you're using rvm, you should never install gems using sudo, as rvm keeps 
its environment tied to your regular user account.


Try gem install taps (no sudo) instead. Then it should show up when you run gem list.
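As a quick sanity check that your shell's gem environment is actually the rvm one (the exact paths vary per machine):

```ruby
# Print where the active ruby installs and finds gems. Under rvm this
# should point into ~/.rvm/gems/...; a /Library or /usr path means the
# system ruby is active, and sudo-installed gems went somewhere your
# rvm ruby never looks.
puts Gem.dir
puts Gem.bindir

# Is taps visible to this gem environment?
installed = Gem::Specification.any? { |spec| spec.name == "taps" }
puts "taps installed: #{installed}"
```

If Gem.dir is not an rvm path, your PATH is picking up the wrong ruby.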

HTH,
Zach


On Thursday, December 9, 2010 at 7:47 AM, Jimmy wrote:

 I'm trying to use Heroku's Taps gem to get my database from their
 server.
 
 When I run $heroku db:pull it says I need to install the Taps gem
 using the command:
 
 sudo gem install taps
 
 I run this command, and as expected, Taps says it has installed (1
 gem installed). I'm able to run the gem update taps command without
 an error after installing.
 
 However, $gem list does not show Taps as installed, and I cannot see
 it in the gem folder at
 
 /Users/username/.rvm/gems/ree-1.8.7-2010...@appname
 
 Needless to say, I cannot run the Heroku db:pull command because of
 this.
 
 I am running RVM; I don't know if this is relevant.
 
 I'm sure I'm doing something simple wrong...
 
 
 
 
 





Re: Best way to DB import 1M+ rows?

2010-12-10 Thread Zach Bailey

 Thanks John, that's a great suggestion. Unfortunately it's looking like it 
will take about 7.5 hours to import 3.12M rows:


1 tables, 3,123,800 records
companies:  1% | | ETA: 07:25:34


I'm wondering if there's a more expedient route... in the past I've used the 
postgres COPY command [1] to do bulk imports of large data sets quickly, but 
that requires that the server be able to read a file off the server's local 
filesystem. I don't suppose that's feasible given how the Heroku platform 
works, but would love to be pleasantly surprised :)


Anyone from Heroku able to pipe up and offer any other possible suggestions? 
Just to restate the problem, I have a single table with about 3.12M records 
that I'm wanting to transfer from a local DB to my remote Heroku DB without 
touching the other Heroku app data. It's ok if the table gets blown away on the 
Heroku side as it has nothing in it (new model I just added).

Happy Friday,
Zach



[1] http://www.postgresql.org/docs/8.4/interactive/sql-copy.html

On Thursday, December 9, 2010 at 4:36 AM, johnb wrote:

 If it's just a single table and you have it in a db locally, then db:push 
 --tables tablename would get it up to heroku - but this will replace the 
 contents of the remote table with the local table rather than append to it. If 
 the application is live, you could put it into maintenance mode, db:pull 
 --tables tablename, append your rows to it, and then push the table back and 
 put the app live...
 
 
 perhaps?
 
 
 John.
 
 
 
 
 





Re: Best way to DB import 1M+ rows?

2010-12-10 Thread Zach Bailey

 Thank you David and Peter for your awesome ideas. Reading back over the 
postgres COPY command docs [1] it doesn't look like that would ever be a 
feasible solution given the following stipulation:


Files named in a COPY command are read or written directly by the server, not 
by the client application. Therefore, they must reside on or be accessible to 
the database server machine, not the client. They must be accessible to and 
readable or writable by the PostgreSQL user (the user ID the server runs as), 
not the client. COPY naming a file is only allowed to database superusers, 
since it allows reading or writing any file that the server has privileges to 
access.


So, it looks like this is something Heroku would have to wrap into an 
abstracted administrative function down the line, if they ever did it at all, 
due to the super-user access requirement.


Given that, the remaining options consist of a.) restoring a partial backup or 
b.) doing raw inserts.


a.) seems like it would be possible using a dedicated db + heroku pg:ingress + 
pg_restore -a -t tablename


b.) is of course possible via a variety of methods (rake task, REST API, taps, 
etc.), but unfortunately rather slow, as there is a lot of overhead in doing 
single inserts (compared to writing the table contents in bulk).
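For option (b), the per-row overhead can be cut substantially by batching many rows into a single multi-row INSERT statement. A minimal sketch (the table and column names are made up for illustration, and this naive quoting is only acceptable for trusted static data, not a substitute for real parameter escaping):

```ruby
# Build one multi-row INSERT from an array of rows, so N rows cost one
# statement's round trip instead of N.
def multi_row_insert(table, columns, rows)
  values = rows.map do |row|
    # double up single quotes - the standard SQL string escape
    "(" + row.map { |v| "'#{v.to_s.gsub("'", "''")}'" }.join(", ") + ")"
  end
  "INSERT INTO #{table} (#{columns.join(', ')}) VALUES #{values.join(', ')};"
end

sql = multi_row_insert("companies", %w[name city],
                       [["Acme", "Atlanta"], ["O'Neil Co", "Boston"]])
puts sql
```

Executing one such statement per few-hundred-row batch (via ActiveRecord's connection.execute, say) is usually an order of magnitude faster than row-at-a-time saves.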


I'm thinking I'll give option (a) a try and see how it goes.

-Zach



[1] http://www.postgresql.org/docs/8.4/interactive/sql-copy.html

On Friday, December 10, 2010 at 3:12 PM, David Dollar wrote:

 Another possible solution would be this:
 
 
 Upload your data in CSV/TSV/whatever form to S3. Write a rake task that does 
 the following:
 
 
 * download from S3 to RAILS_ROOT/tmp
 * use the psql command line tool (it's on our dyno grid) or one of the 
 ActiveRecord bulk import extensions to read the file and import to your 
 database
 
 
 Then you can run it with heroku rake my_import_task
 
 
 If this is going to be a regular process, you'll likely want to wrap all of 
 this up as something you can run from a worker using DJ or its ilk.
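David's psql suggestion can also sidestep the server-side file restriction entirely: psql's \copy meta-command streams the file from the client, so the database server never needs filesystem access. A sketch of building that invocation from a DATABASE_URL-style connection string (all names here are hypothetical, for illustration only):

```ruby
require "uri"

# Build a psql \copy invocation for a client-side CSV import.
# Unlike server-side COPY, \copy needs no superuser or server
# filesystem access - psql reads the file and streams it over.
def psql_copy_command(database_url, table, csv_path)
  uri = URI.parse(database_url)
  db  = uri.path.sub(%r{\A/}, "")
  %Q{PGPASSWORD=#{uri.password} psql -h #{uri.host} -U #{uri.user} -d #{db} } +
    %Q{-c "\\copy #{table} FROM '#{csv_path}' WITH CSV"}
end

cmd = psql_copy_command("postgres://user:secret@host.example.com/mydb",
                        "companies", "tmp/companies.csv")
puts cmd
# system(cmd)  # uncomment inside the rake task to actually run the import
```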
 



Best way to DB import 1M+ rows?

2010-12-08 Thread Zach Bailey
 I'm wondering if anyone has had experience importing a large amount of data 
into their heroku database and how they went about doing it. I'm staring down 
about 3-4M rows of (static) data that I'd like to get into our DB on Heroku.


If I had my own database locally, I would probably use the COPY command, as it is 
blazing fast. Inserting a row at a time from Ruby code or going through an 
exposed API looks like it will take multiple days (without spending 
extra money to crank up more dynos and buy a larger DB option).


Another option I thought of would be to load the data into a local db, then dump 
only that table and restore it into the heroku db as a partial backup, but I 
didn't see any facility for that.


Has anyone done anything like this, or can anyone think of a good way to do it?


Thanks,
Zach




Determining Database Size?

2010-12-07 Thread Zach Bailey
 I was looking for a way, via the web interface or command line, to determine the 
size of my database. Could anyone point me in the right direction? I've dug 
around and searched but am not having much luck.
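One workaround in the meantime (assuming a Rails app, so ActiveRecord is available in heroku console) is to ask Postgres directly; pg_database_size and pg_size_pretty are standard Postgres functions:

```
$ heroku console
>> ActiveRecord::Base.connection.select_value(
     "SELECT pg_size_pretty(pg_database_size(current_database()))")
```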


For the Heroku folks, a point of feedback: I would expect to see my database 
size alongside the rest of my app's data in the General Info section.


Thanks,
Zach
