[twitter-dev] Re: 404 Errors on friends and followers using cursors

2010-01-09 Thread bear
I am now seeing this on some of my own accounts - has there been any
movement, or has a fix been applied?

here are the urls i'm trying:

curl http://twitter.com/statuses/friends/codebear.json   -- returns
[]
curl http://twitter.com/statuses/friends/manta.json   -- returns
{"request":"/statuses/friends/manta.json","error":"Not found"}



On Dec 28 2009, 2:42 pm, Mageuzi mage...@gmail.com wrote:
 Sorry to keep bringing this up, but this is still causing problems for
 me.  Is there any follow-up as to what the issue is?  Thanks in
 advance.

 On Dec 22, 10:06 pm, Mageuzi mage...@gmail.com wrote:



  Is there an update to the status of this issue? A user of my program
  reported a problem that ended up being this.  While trying to iterate
  through:http://api.twitter.com/1/statuses/friends/oevl.xml
  Cursor 1274505087418535016 returned fine and contained a next_cursor
  value of 1267920196862230269.  That value returned a 404.

  On Dec 8, 1:32 pm, Ammo Collector binhqtra...@gmail.com wrote:

   If you get the following URLs and continue using the next_cursor,
   you receive incorrect 404s:

  http://twitter.com/statuses/friends/debra_bee.xml?cursor=130554434315...

   Any ideas?
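For anyone scripting around this in the meantime, here is a rough Python sketch of the cursor walk this thread describes, retrying a 404 a couple of times in case it is the spurious kind reported above rather than a genuinely missing user. The endpoint and response shape are the historical v1 ones discussed in the thread; `friends_url` and `fetch_friends` are illustrative names, not library functions.

```python
import json
import time
import urllib.error
import urllib.request

# Historical v1 endpoint, as used in the curl examples above.
API = "http://twitter.com/statuses/friends/{user}.json"

def friends_url(user, cursor=-1):
    """Build the cursored friends URL (cursor=-1 asks for the first page)."""
    return API.format(user=user) + "?cursor={}".format(cursor)

def fetch_friends(user, retries=3):
    """Walk the cursor chain, retrying a 404 a few times in case it is the
    spurious kind reported in this thread.  Assumes the historical cursored
    response shape: {"users": [...], "next_cursor": N}."""
    cursor = -1
    while cursor != 0:
        for attempt in range(retries):
            try:
                with urllib.request.urlopen(friends_url(user, cursor)) as resp:
                    page = json.load(resp)
                break
            except urllib.error.HTTPError as err:
                if err.code != 404 or attempt == retries - 1:
                    raise
                time.sleep(2 ** attempt)  # back off before retrying
        yield from page.get("users", [])
        cursor = page.get("next_cursor", 0)
```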


[twitter-dev] Internal Server Error 500 on using the http://twitter.com/account/update_profile_background_image.format API

2010-01-09 Thread Vikram
Hi All,


I am trying to use this API using OAuth from my C++ .NET client.

How can I get additional information about this error so that I can
try to fix it?

this->generateSignature();
WebRequest^ myRequest = WebRequest::Create(
    "http://twitter.com/account/update_profile_background_image.xml");
myRequest->Method = "POST";
String^ boundary = this->CreateBoundary();
myRequest->ContentType = "multipart/form-data; boundary=" + boundary;
Encoding^ encoding = Encoding::ASCII;
String^ requestString = L"--" + boundary + L"\r\n" +
    L"Content-Disposition: form-data; name=\"image\"; filename=\"test.JPG\"" +
    L"\r\n" + L"Content-Type: image/jpg" + L"\r\n\r\n";
Stream^ requestStream = myRequest->GetRequestStream();
array<Byte>^ headerBytes = encoding->GetBytes(requestString);
requestStream->Write(headerBytes, 0, headerBytes->Length);
FileInfo^ file = gcnew FileInfo(L"C:/Documents and Settings/vikramp/"
    L"My Documents/My Pictures/Picasa Exports/Picasa Export/test.JPG");
FileStream^ myImage = file->OpenRead();
array<Byte>^ ByteArray;
if (myImage->CanRead)
{
    ByteArray = gcnew array<Byte>(safe_cast<int>(myImage->Length));
    myImage->Read(ByteArray, 0, safe_cast<int>(myImage->Length));
    requestStream->Write(ByteArray, 0, safe_cast<int>(myImage->Length));
}
// Note: compute the trailer bytes once -- the original passed the length of a
// different, shorter string here, which truncated the closing boundary.
array<Byte>^ trailerBytes = encoding->GetBytes(L"\r\n--" + boundary + L"--");
requestStream->Write(trailerBytes, 0, trailerBytes->Length);
requestStream->Close();
myImage->Close();
Stream^ data = myRequest->GetResponse()->GetResponseStream();

Just in case someone needs the code.

All I am doing here is signing the request with the default parameters
(the image parameter is not included).

Then I write the multipart data.

Please help me out or direct me to a link which gives information
about using the API.
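In case it helps to compare against a known-good framing, here is a small Python sketch of the multipart body the C++/CLI code above is trying to build. A malformed body (a missing blank line after the part headers, or a truncated closing boundary) is a plausible cause of a 500 on this endpoint. `multipart_body` is an illustrative helper, and the boundary and filename values are placeholders.

```python
def multipart_body(boundary, field, filename, content_type, payload):
    """Frame one file part exactly as multipart/form-data requires:
    dashes + boundary, part headers, a blank line, the raw bytes, then the
    closing boundary terminated with trailing dashes."""
    head = (
        "--{b}\r\n"
        'Content-Disposition: form-data; name="{f}"; filename="{n}"\r\n'
        "Content-Type: {t}\r\n"
        "\r\n"
    ).format(b=boundary, f=field, n=filename, t=content_type)
    tail = "\r\n--{b}--\r\n".format(b=boundary)
    return head.encode("ascii") + payload + tail.encode("ascii")
```

The body bytes then go out with a `Content-Type: multipart/form-data; boundary=...` header using the same boundary string.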


[twitter-dev] Please Help !!! How do i build OAuth based request for the update_profile_background_image.format API

2010-01-09 Thread Vikram
Hi All,


Please let me know what HTTP parameters need to be included for this
API.

Should the 'image' parameter be considered for the OAuth signature base?

How should the value for the image parameter be populated?

Should it be the byte array of the image file, or something else?

Please help me out. If possible give me an example request.
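For what it's worth, OAuth 1.0a only signs parameters from the query string and from `application/x-www-form-urlencoded` bodies; a `multipart/form-data` body, including the image bytes, is left out of the signature base. A minimal sketch of the base-string construction follows (illustrative helper name, not a library API):

```python
import urllib.parse

def signature_base(method, url, params):
    """OAuth 1.0a signature base string.  For a multipart/form-data POST the
    body (including the image bytes) is NOT signed -- only the oauth_*
    parameters and any query-string parameters belong in `params`."""
    enc = lambda s: urllib.parse.quote(str(s), safe="")
    # Parameters are sorted, percent-encoded, and joined with '&', then the
    # whole parameter string is percent-encoded once more.
    pairs = "&".join("{}={}".format(enc(k), enc(v))
                     for k, v in sorted(params.items()))
    return "&".join([method.upper(), enc(url), enc(pairs)])
```

The resulting string is what gets HMAC-SHA1 signed with the consumer secret and token secret.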





[twitter-dev] Is there a way to auto-update my status?

2010-01-09 Thread jpatterson
I have a WordPress blog to which I and other writers add new posts
frequently. After I (or my writers) post a new post, I would like my
status to automatically be updated with the new post. I've looked
through the RESTful API and it looks like there is a status update
call, but it looks like I would have to create a Twitter application
to make it work (which seems a bit extreme to me). Is there a way to
simply pass twitter some authentication data and my new status and
have it automatically update my status? Thanks.
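If you do go the direct route, the shape of the call at the time was a Basic-Auth POST to statuses/update. Treat this as a sketch of the API as it stood in early 2010 (Twitter later removed Basic Auth); the function name is illustrative and the credentials are placeholders.

```python
import base64
import urllib.parse
import urllib.request

def build_update_request(username, password, status):
    """Build a Basic-Auth POST to the historical statuses/update endpoint.
    Returns the Request unsent, so callers decide when to fire it."""
    req = urllib.request.Request(
        "http://twitter.com/statuses/update.json",
        data=urllib.parse.urlencode({"status": status}).encode("ascii"),
    )
    token = base64.b64encode("{}:{}".format(username, password).encode("ascii"))
    req.add_header("Authorization", "Basic " + token.decode("ascii"))
    return req  # send with urllib.request.urlopen(req)
```

A WordPress hook on publish could call this with the new post's title and link.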


Re: [twitter-dev] Is there a way to auto-update my status?

2010-01-09 Thread Lukas Müller

You can auth via Basic Auth. It's really simple.
No need to create an App.

Am 09.01.2010 um 17:57 schrieb jpatterson je...@squarecompass.com:


I have a WordPress blog to which I and other writers add new posts
frequently. After I (or my writers) post a new post, I would like my
status to automatically be updated with the new post. I've looked
through the RESTful API and it looks like there is a status update
call, but it looks like I would have to create a Twitter application
to make it work (which seems a bit extreme to me). Is there a way to
simply pass twitter some authentication data and my new status and
have it automatically update my status? Thanks.


Re: [twitter-dev] Is there a way to auto-update my status?

2010-01-09 Thread Abraham Williams
After a few seconds of Googling I came across this:
http://blog.victoriac.net/blog/twitter-updater

I'm sure there are a number of other plugins on
http://wordpress.org/extend/plugins/

On Sat, Jan 9, 2010 at 11:34, Lukas Müller webmas...@muellerlukas.de wrote:

 You can auth via Basic Auth. It's really simple.
 No need to create an App.

 Am 09.01.2010 um 17:57 schrieb jpatterson je...@squarecompass.com:


  I have a WordPress blog to which I and other writers add new posts
 frequently. After I (or my writers) post a new post, I would like my
 status to automatically be updated with the new post. I've looked
 through the RESTful API and it looks like there is a status update
 call, but it looks like I would have to create a Twitter application
 to make it work (which seems a bit extreme to me). Is there a way to
 simply pass twitter some authentication data and my new status and
 have it automatically update my status? Thanks.




-- 
Abraham Williams | #doit | http://hashtagdoit.com
Project | Intersect | http://intersect.labs.poseurtech.com
Hacker | http://abrah.am | http://twitter.com/abraham
This email is: [ ] shareable [x] ask first [ ] private.
Sent from Madison, WI, United States


[twitter-dev] Streaming API - AND between filter keywords

2010-01-09 Thread Amitab
Hi folks,

Is there a way by which I can get streaming results tracking a
combination of words? For example, is it possible to get streaming
results which track the keyword "San Francisco", i.e. "San" AND
"Francisco"? I could track "San" OR "Francisco" and then filter for
"San" AND "Francisco", but the results for "San" alone are huge.

/Amitabh

Follow Twaller @mytwaller



[twitter-dev] Re: Twitter with Google Visualization

2010-01-09 Thread Kidd
HI Peter,

Ok, so what I understand you to be saying is that Twitter only keeps 7
days or 3200 results available per person?  So if I want trending over
time (more than 7 days) I'm going to have to call that data and then
store it in a DB?

Right now I am dabbling in Python as a way to retrieve, parse and
write data.

thanks
jason

On Jan 7, 4:52 pm, Peter Denton petermden...@gmail.com wrote:
 Hi Kidd
 Main reason to localize the data is for user experience.
 If twitter search slows down, you may have page loads waiting for the
 content you need. Also, you will get only 3200 results, or a historical
 snapshot of 7 days from a query, so you run the risk of losing data outside.
 It all depends on what data you need for how long.

 Now, if twitter search data on the fly works well, you basically need to:

 1.  retrieve the data from twitter search (probably json) - I use jQuery so
 it would be something like this: http://docs.jquery.com/Ajax/jQuery.ajax

 2. parse the response result, convert it to proper JSON google
 visualizations wants for consumption to create a
 google.visualization.DataTable

 3. create a view of the data table, specifying the information you want to
 display on the graph

 4. create the visualization (areaImageChart, annotatedTimeline, etc)

 Here is an example from Google where the JSON is hardcoded, but aside from
 getting and parsing the data from twitter, this should show you what you
 need. http://www.mail-archive.com/google-visualization-...@googlegroups.com...

 Cheers
 Peter
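As a rough illustration of Peter's step 2, here is one way to turn per-day tweet counts (already extracted and counted from the search response) into the JSON literal a `google.visualization.DataTable` constructor accepts. The column ids and labels are placeholders.

```python
import json

def to_datatable(daily_counts):
    """Convert {date-string: tweet_count} pairs into DataTable JSON:
    a column spec plus one row per day, rows sorted by date."""
    return json.dumps({
        "cols": [{"id": "day", "label": "Day", "type": "string"},
                 {"id": "tweets", "label": "Tweets", "type": "number"}],
        "rows": [{"c": [{"v": day}, {"v": n}]}
                 for day, n in sorted(daily_counts.items())],
    })
```

That string can be fed straight to `new google.visualization.DataTable(...)` on the page, which covers steps 3 and 4.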


[twitter-dev] Re: Best way to pull/cache location based search results?

2010-01-09 Thread Amitab
Twaller.com is a service which categorizes location-based tweets,
particularly useful to travelers. You can check us out at www.twaller.com.
We search tweets based on combinations of keywords and then
filter them using language-processing algorithms.

If you would like access to the data using a Web API, do let us know.

Amitab

follow Twaller @mytwaller



On Jan 8, 7:49 am, @epc epcoste...@gmail.com wrote:
 On Jan 8, 9:29 am, GeorgeMedia georgeme...@gmail.com wrote:

  No one?

 I think you would be better off consuming the firehose, geocoding the
 tweets yourself, and throwing away any that aren’t in regions you care
 about, caching the rest for a period of time.

 The thing to remember about geocoding of tweets is that until very
 recently the geocoding was solely by the location field in a user’s
 profile.  True geocoding of individual tweets is very recent and
 depends on the user enabling geo coding, and on the user agent posting
 the lat/lon with the tweet.  So the firehose *does* contain the geo
 field, it's just mostly empty because most clients don’t populate it
 yet.  So if the geo field is empty you’d have to geocode based on
 the location field which is a bit of a hairball and may contain any
 data up to 30 bytes.

 Alternately, do the cron job thing but enlarge the regions you’re
 searching on (search on the top N cities or metros for example, not
 200,000 coordinates).  Cache the data, and accept that it won’t be
 absolutely up to date (it’s already lost a lot of precision since the
 location field is completely arbitrary and even if it is a city or
 lat/lon pair, does not necessarily represent where the twitter user
 was at that moment in time).

 --
 -ed costello


Re: [twitter-dev] Streaming API - AND between filter keywords

2010-01-09 Thread Mark McBride
Currently no.  What I would do is search for "Francisco" (a much rarer
term), and then manually check for "San Francisco" on your end.

   ---Mark

http://twitter.com/mccv



On Sat, Jan 9, 2010 at 2:32 PM, Amitab hiamita...@gmail.com wrote:
 Hi folks,

 Is there a way by which I can get streaming results tracking a
 combination of words. For example, is it possible to get streaming
 results which track the keyword San Francisco i.e, San AND
 Francisco. I could track San OR Francisco and then filter out for
 San AND  Francisco but the results for San are very huge.

 /Amitabh

 Follow Twaller @mytwaller
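Mark's suggestion (stream on the rarer word, then AND on the client) can be sketched like this; `phrase_filter` is an illustrative helper operating on already-decoded tweet dicts:

```python
def phrase_filter(tweets, phrase):
    """Emulate AND on top of a single-keyword track: stream on the rarer
    word, then keep only tweets whose text contains every word of the
    phrase (case-insensitive)."""
    words = [w.lower() for w in phrase.split()]
    for tweet in tweets:
        text = tweet.get("text", "").lower()
        if all(w in text for w in words):
            yield tweet
```

Note this matches "San" and "Francisco" anywhere in the tweet, not necessarily adjacent; require `phrase.lower() in text` instead if only the exact phrase counts.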




Re: [twitter-dev] Re: Twitter with Google Visualization

2010-01-09 Thread Mark McBride
Twitter search keeps a limited amount of data (limited by time, a
fairly short window).
The tweets themselves, however, are kept indefinitely.  Currently we
only support accessing the last 3200 of them via the web and the API.

   ---Mark

http://twitter.com/mccv



On Sat, Jan 9, 2010 at 3:32 PM, Kidd jva...@gmail.com wrote:
 HI Peter,

 Ok, so what I understand you to be saying is that Twitter only keeps 7 days
 or 3200 results available per person?  So if I want trending over time
 (more than 7days) I'm going to have to call that data and then store
 it in a DB?

 Right now I am dabbling in Python as a way to retrieve, parse and
 write data.

 thanks
 jason

 On Jan 7, 4:52 pm, Peter Denton petermden...@gmail.com wrote:
 Hi Kidd
 Main reason to localize the data is for user experience.
 If twitter search slows down, you may have page loads waiting for the
 content you need. Also, you will get only 3200 results, or a historical
 snapshot of 7 days from a query, so you run the risk of losing data outside.
 It all depends on what data you need for how long.

 Now, if twitter search data on the fly works well, you basically need to:

 1.  retrieve the data from twitter search (probably json) - I use jQuery so
 it would be something like this: http://docs.jquery.com/Ajax/jQuery.ajax

 2. parse the response result, convert it to proper JSON google
 visualizations wants for consumption to create a
 google.visualization.DataTable

 3. create a view of the data table, specifying the information you want to
 display on the graph

 4. create the visualization (areaImageChart, annotatedTimeline, etc)

 Here is an example from Google where the JSON is hardcoded, but aside from
 getting and parsing the data from twitter, this should show you what you
 need. http://www.mail-archive.com/google-visualization-...@googlegroups.com...

 Cheers
 Peter



Re: [twitter-dev] Re: Best way to pull/cache location based search results?

2010-01-09 Thread Mark McBride
Sorry for the delay on this... but what epc said sounds like a
reasonable approach.  Note that the streaming API does support
bounding box filters now.  However, they only work off the "geo"
element, not the "location" field.

   ---Mark

http://twitter.com/mccv



On Sat, Jan 9, 2010 at 4:17 PM, Amitab hiamita...@gmail.com wrote:
 Twaller.com is a service which categorizes location-based tweets,
 particularly useful to travelers. You can check us out at www.twaller.com.
 We search tweets based on combinations of keywords and then
 filter them using language-processing algorithms.

 If you would like access to the data using a Web API, do let us know.

 Amitab

 follow Twaller @mytwaller



 On Jan 8, 7:49 am, @epc epcoste...@gmail.com wrote:
 On Jan 8, 9:29 am, GeorgeMedia georgeme...@gmail.com wrote:

  No one?

 I think you would be better off consuming the firehose, geocoding the
 tweets yourself, and throwing away any that aren’t in regions you care
 about, caching the rest for a period of time.

 The thing to remember about geocoding of tweets is that until very
 recently the geocoding was solely by the location field in a user’s
 profile.  True geocoding of individual tweets is very recent and
 depends on the user enabling geo coding, and on the user agent posting
 the lat/lon with the tweet.  So the firehose *does* contain the geo
 field, it's just mostly empty because most clients don’t populate it
 yet.  So if the geo field is empty you’d have to geocode based on
 the location field which is a bit of a hairball and may contain any
 data up to 30 bytes.

 Alternately, do the cron job thing but enlarge the regions you’re
 searching on (search on the top N cities or metros for example, not
 200,000 coordinates).  Cache the data, and accept that it won’t be
 absolutely up to date (it’s already lost a lot of precision since the
 location field is completely arbitrary and even if it is a city or
 lat/lon pair, does not necessarily represent where the twitter user
 was at that moment in time).

 --
 -ed costello



[twitter-dev] Possible to get verified account status via API?

2010-01-09 Thread Amir Michail
Hello,

This would be useful to create celebrity leaderboard(s) in a game.

Is it possible to get this information via the API?

Amir


Re: [twitter-dev] Possible to get verified account status via API?

2010-01-09 Thread John Meyer

On 1/9/2010 6:26 PM, Amir Michail wrote:

Hello,

This would be useful to create celebrity leaderboard(s) in a game.

Is it possible to get this information via the API?

Amir

   



http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-users%C2%A0show

verified attribute

HTH
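A minimal sketch of reading that attribute from a users/show JSON payload (illustrative helper; it defaults to False when the field is absent):

```python
import json

def is_verified(user_json):
    """Read the `verified` boolean from a users/show response body;
    older payloads may omit the field, so treat missing as False."""
    return bool(json.loads(user_json).get("verified", False))
```

A leaderboard could call users/show for each candidate account and keep only those where this returns True.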



[twitter-dev] Re: Streaming API - AND between filter keywords

2010-01-09 Thread Amitab
Thanks Mark. That helps a lot.

On Jan 9, 4:44 pm, Mark McBride mmcbr...@twitter.com wrote:
 Currently no.  What I would do is search for "Francisco" (a much rarer
 term), and then manually check for "San Francisco" on your end.

    ---Mark

 http://twitter.com/mccv



 On Sat, Jan 9, 2010 at 2:32 PM, Amitab hiamita...@gmail.com wrote:
  Hi folks,

  Is there a way by which I can get streaming results tracking a
  combination of words. For example, is it possible to get streaming
  results which track the keyword San Francisco i.e, San AND
  Francisco. I could track San OR Francisco and then filter out for
  San AND  Francisco but the results for San are very huge.

  /Amitabh

  Follow Twaller @mytwaller


[twitter-dev] Is there a different rate limit for social graph methods?

2010-01-09 Thread M. Edward (Ed) Borasky
Here's what I'm doing:

1. Checking the rate limit status. It returns the following:

remaining hits: 61, seconds to go: 3386, sleeping 55.5081967213115
seconds

2. Authorizing with oAuth, desktop PIN style via Firefox

Starting Firefox to authorize - enter PIN: oAuth completed
authorized: 1

3. Calling followers_ids. I get this:

Rate limit exceeded. Clients may not make more than 450 requests per
hour.

Huh?

I'm doing all this in Perl - haven't had a chance to look at the HTTP
stuff coming back yet. But is there another rate limit status call we
need to make to check this case?


Re: [twitter-dev] Is there a different rate limit for social graph methods?

2010-01-09 Thread Mark McBride
How are you authorizing when calling rate limit status?  Same OAuth credentials?

   ---Mark

http://twitter.com/mccv



On Sat, Jan 9, 2010 at 7:38 PM, M. Edward (Ed) Borasky zzn...@gmail.com wrote:
 Here's what I'm doing:

 1. Checking the rate limit status. It returns the following:

 remaining hits: 61, seconds to go: 3386, sleeping 55.5081967213115
 seconds

 2. Authorizing with oAuth, desktop PIN style via Firefox

 Starting Firefox to authorize - enter PIN: oAuth completed
 authorized: 1

 3. Calling followers_ids. I get this:

 Rate limit exceeded. Clients may not make more than 450 requests per
 hour.

 Huh?

 I'm doing all this in Perl - haven't had a chance to look at the HTTP
 stuff coming back yet. But is there another rate limit status call we
 need to make to check this case?



[twitter-dev] Re: Is there a different rate limit for social graph methods?

2010-01-09 Thread M. Edward (Ed) Borasky
Yeah ... oAuth first, then call rate limit status.

I see what's happening. I'm testing followers_ids on an account with
a huge number of followers (millions). I can get approximately 5000 per
cursor page, but at some point the servers are saying, "Hey - quit
doing that!" and throwing an error:

Rate limit exceeded. Clients may not make more than 450 requests per
hour.

It's coming back with a 400 Bad Request. The Perl API library is
pretty good about giving me details - here's what it's saying:

x-ratelimit-limit = 450
x-ratelimit-remaining = 0
x-ratelimit-reset = 1263101958
x-ratelimit-class = api_identified

It looks like there is a limit of 450 requests of some kind, and once
I go over that, I'm shut out for an hour. Curiously enough, the
standard rate_limit_status operation is returning a constant 150
hits and an hour remaining in this sequence. My code thought it was
cool and just kept going. So it looks like there is a separate rate
limit for cursor pages inside the followers_ids paging mechanism.

I'll know more in another hour. ;-)


On Jan 9, 7:51 pm, Mark McBride mmcbr...@twitter.com wrote:
 How are you authorizing when calling rate limit status?  Same OAuth 
 credentials?

    ---Mark

 http://twitter.com/mccv

 On Sat, Jan 9, 2010 at 7:38 PM, M. Edward (Ed) Borasky zzn...@gmail.com 
 wrote:

  Here's what I'm doing:

  1. Checking the rate limit status. It returns the following:

  remaining hits: 61, seconds to go: 3386, sleeping 55.5081967213115
  seconds

  2. Authorizing with oAuth, desktop PIN style via Firefox

  Starting Firefox to authorize - enter PIN: oAuth completed
  authorized: 1

  3. Calling followers_ids. I get this:

  Rate limit exceeded. Clients may not make more than 450 requests per
  hour.

  Huh?

  I'm doing all this in Perl - haven't had a chance to look at the HTTP
  stuff coming back yet. But is there another rate limit status call we
  need to make to check this case?
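A small sketch of the backoff logic the x-ratelimit headers quoted above suggest: when `x-ratelimit-remaining` hits zero, sleep until the `x-ratelimit-reset` epoch (illustrative helper; header names as reported by the Perl library):

```python
import time

def backoff_seconds(headers, now=None):
    """How long to sleep given the x-ratelimit response headers: zero
    remaining calls means wait until the reset epoch, otherwise proceed."""
    now = time.time() if now is None else now
    remaining = int(headers.get("x-ratelimit-remaining", 1))
    reset = int(headers.get("x-ratelimit-reset", 0))
    if remaining > 0:
        return 0
    return max(0, reset - now)
```

Checking these headers on every response, rather than polling rate_limit_status, also sidesteps the problem that the separate 450/hour class never shows up in the standard status call.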


Re: [twitter-dev] Re: Is there a different rate limit for social graph methods?

2010-01-09 Thread Mark McBride
If you can post complete HTTP conversations of both successful and
failed calls (any sensitive info elided) that would be great.  If the
Perl library is trying to transparently get the entire social graph
you'll definitely get rate limited.

   ---Mark

http://twitter.com/mccv



On Sat, Jan 9, 2010 at 9:31 PM, M. Edward (Ed) Borasky zzn...@gmail.com wrote:
 Yeah ... oAuth first, then call rate limit status.

 I see what's happening. I'm testing followers_ids on an account with
 a huge number of followers (millions). I can get approximately 5000 a
 cursor page, but at some point, the servers are saying, Hey - quit
 doing that! and throwing an error:

 Rate limit exceeded. Clients may not make more than 450 requests per
 hour.

 It's coming back with a 400 Bad Request. The Perl API library is
 pretty good about giving me details - here's what it's saying:

 x-ratelimit-limit = 450
 x-ratelimit-remaining = 0
 x-ratelimit-reset = 1263101958
 x-ratelimit-class = api_identified

 It looks like there is a limit of 450 requests of some kind, and once
 I go over that, I'm shut out for an hour. Curiously enough, the
 standard rate_limit_status operation is returning a constant 150
 hits and an hour remaining in this sequence. My code thought it was
 cool and just kept going. So it looks like there is a separate rate
 limit for cursor pages inside the followers_id paging mechanism.

 I'll know more in another hour. ;-)


 On Jan 9, 7:51 pm, Mark McBride mmcbr...@twitter.com wrote:
 How are you authorizing when calling rate limit status?  Same OAuth 
 credentials?

    ---Mark

 http://twitter.com/mccv

 On Sat, Jan 9, 2010 at 7:38 PM, M. Edward (Ed) Borasky zzn...@gmail.com 
 wrote:

  Here's what I'm doing:

  1. Checking the rate limit status. It returns the following:

  remaining hits: 61, seconds to go: 3386, sleeping 55.5081967213115
  seconds

  2. Authorizing with oAuth, desktop PIN style via Firefox

  Starting Firefox to authorize - enter PIN: oAuth completed
  authorized: 1

  3. Calling followers_ids. I get this:

  Rate limit exceeded. Clients may not make more than 450 requests per
  hour.

  Huh?

  I'm doing all this in Perl - haven't had a chance to look at the HTTP
  stuff coming back yet. But is there another rate limit status call we
  need to make to check this case?



[twitter-dev] Re: Is there a different rate limit for social graph methods?

2010-01-09 Thread M. Edward (Ed) Borasky


On Jan 9, 9:59 pm, Mark McBride mmcbr...@twitter.com wrote:
 If you can post complete HTTP conversations of both successful and
 failed calls (any sensitive info elided) that would be great.  If the
 Perl library is trying to transparently get the entire social graph
 you'll definitely get rate limited.

It looks like nothing comes back until it fails. I'm running this with
Komodo and breakpoints, and on a successful call, the Perl library
only returns the requested array of IDs, the next cursor and the
previous cursor. I'm going to run this by the author of the Perl
library. It doesn't look like he's trying to get more than a page at a
time when you specify a cursor, but I don't think he's ever tested
something this big, so he wouldn't have run into it. I can post the
returned HTTP for a failed one, though. ;-)