Re: [twitter-dev] Re: issues with retweets and API

2010-01-04 Thread srikanth reddy
home_timeline also includes both.
For the user's own retweeted statuses I would just check
if (status[i].retweeted_status != null and status[i].user.screen_name == currentuser).
But the problem comes when you have a friend's redundant status, i.e.
status[i].retweeted_status != null and status[i].user.screen_name !=
currentuser. This friend's status appears in your home timeline even after
you retweet it. If you try to retweet it again, it will throw an error as it
has already been retweeted by you.
To prevent this you have to manually disable that friend's retweets from
appearing in your home_timeline (this option is available on the web only
and has to be set for each and every user; currently this is also not
working).
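
As a rough illustration, here is a minimal sketch of the check described
above (Python; `timeline` and `currentuser` are hypothetical variables,
standing for a parsed statuses/home_timeline JSON page and the
authenticated screen name):

    # Split a home_timeline page into "retweeted by me" vs. friends' retweets.
    def classify_retweets(timeline, currentuser):
        mine, friends = [], []
        for status in timeline:
            if status.get('retweeted_status') is None:
                continue  # plain tweet, not a retweet
            if status['user']['screen_name'] == currentuser:
                mine.append(status)      # retweeted by me: safe to undo
            else:
                friends.append(status)   # retweeting this again errors out
        return mine, friends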


Anyhow these issues are already reported here

http://code.google.com/p/twitter-api/issues/detail?id=1214

http://code.google.com/p/twitter-api/issues/detail?id=1274



On Mon, Jan 4, 2010 at 2:24 AM, John munz...@gmail.com wrote:

 I understood you since the beginning. It doesn't feel redundant to me,
 I'm pretty sure that is intended functionality.

 Even if they disappeared from "retweeted by others" there still needs
 to be a way to know if you can undo regular tweets you've retweeted
 since they don't include any retweeted information in the other
 timeline methods (home_timeline etc).


 On Jan 3, 11:40 am, srikanth reddy srikanth.yara...@gmail.com wrote:
  I am not sure I expressed it clearly. Pardon my language.

  They will only disappear if your friends undo.

   It is true that they will disappear if your friends undo. But my point is
  that they should also disappear (not instantly) from 'Retweets by Others'
  when you retweet them from the 'Retweets by Others' tab (because they will
  be added to 'Retweets by Me'), and keeping them in 'Retweets by Others' is
  just redundant. If you refresh your 'Retweets by Others' tab you will see
  the tweet as retweeted by you and your friend, and you have the option of
  undoing it. But this undoing is possible only on the web. From an API point
  of view, if these statuses were removed from 'Retweets by Others' the
  moment the user retweets them, then undo would be simple (just delete the
  status id obtained from the statuses/retweet response). This type of
  undoing works only instantly, i.e. you cannot undo after you refresh the
  tab ('retweeted_to_me' no longer includes that status).

   This is true for other timeline methods as well. But if keeping this
  redundant data is intended then twitter has to make changes to the payload
  (i.e. add the retweeted_by_me flag) and provide destroy/retweet methods as
  suggested by you. Hope I am clear now.
 
  On Sun, Jan 3, 2010 at 10:14 AM, John munz...@gmail.com wrote:
   They will always remain even if you undo. They will only disappear if
   your friends undo.



[twitter-dev] Re: Something is technically wrong Response 500

2010-01-04 Thread quenotacom
Thank you for the due diligence (lol)... Resolved finally ... it was
a problem with the percent-encoding and UTF-8 encoding of the text (status); I
used hueniverse to check
every step and finally it was solved ... some characters are still
strange (Latin characters), but it is working well enough ...

You are invited to use my old ASP twitter app at www.quenota.com.

quenotacom

On Dec 27 2009, 3:36 pm, quenotacom webmas...@quenota.com wrote:
 Thank you,

 Language ASP CLASSIC

 What end point: http://twitter.com/statuses/update.xml

 Parameters:

 get_twitter_url
 ('POST',twitter_url,oauth_key333,oauth_key333s,token_auth_var,token_secret_var,
 'http://twitter.com', // scope
 'status', // name of the parameter
 mensaje // the text to be sent
 )

 //
 //---------------------------------------------------------------------------
 //
 function get_twitter_url
 (accion,url_var,url_customer,key_secret_o,token333,token334,scope,oaparametro,mensaje)
 {
 var url_var2 = ''
 if (accion == 'POST')
 {
         if (mensaje != '')
         {
                 // encoded status parameter, for the signature base string
                 // (msgtxt) and for the POST body (msgtxt2/DataToSend)
                 var msgtxt  = PE('&'+oaparametro+'=') + PE(mensaje)
                 var msgtxt2 = oaparametro+'='+PE(mensaje)
                 var DataToSend = msgtxt2
         }}

 if (accion == 'GET')
 {
         if (mensaje != '')
         {
                 url_var2 = '?'+oaparametro+'='+PE(mensaje)
         }
         var msgtxt  = ''
         var msgtxt2 = ''
         var DataToSend = null}

 var nonce_o = nonce_rut()
 var ts_o    = ts_rut()

 // OAuth 1.0 signature base string: METHOD&encoded-URL&encoded-param-string,
 // with the '&' between parameters percent-encoded inside the param string
 var base334 =
         accion
 +       '&'+PE(url_var)
 +       url_var2
 +       '&'+PE('oauth_consumer_key=')      + PE(url_customer)
 +       PE('&oauth_nonce=')                + (nonce_o)
 +       PE('&oauth_signature_method=')     + PE('HMAC-SHA1')
 +       PE('&oauth_timestamp=')            + PE(ts_o)
 +       PE('&oauth_token=')                + PE(token333)
 +       PE('&oauth_version=')              + PE('1.0')
 +       msgtxt

 // signing key: consumer secret and token secret, each percent-encoded,
 // joined with '&'
 var key_secret  = key_secret_o;
 var firma       = PE(str2rstr_utf8(key_secret)) + '&' + PE(str2rstr_utf8(token334));
 firma           = (b64_hmac_sha1(firma, base334))

 var auth333     = 'OAuth oauth_version="'          + '1.0", '
                 +'oauth_nonce="'                   + nonce_o       +'", '
                 +'oauth_timestamp="'               + ts_o          +'", '
                 +'oauth_consumer_key="'            + url_customer  +'", '
                 +'oauth_token="'                   + (token333)    +'", '
                 +'oauth_signature_method'          + '="HMAC-SHA1", '
                 +'oauth_signature="'               + PE(firma)     +'"'
                 // the status parameter is sent in the POST body via
                 // DataToSend, not in the Authorization header
                 +'\r\n'

 var w_atom1 = upload_http_get('upload',accion,url_var+url_var2,
 DataToSend,auth333,scope,mensaje)

 if (w_atom1)
 {
         Response.ContentType = 'text/html'}

 else
 {
         Response.Write('Sorry I cannot get the url required ')
 }
 }
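
 For reference, the same signing sequence sketched in Python (an
 illustration of OAuth 1.0a HMAC-SHA1 signing, not the poster's actual
 code; all names besides the standard oauth_* parameters are assumptions):

     import base64, hashlib, hmac, time, uuid
     from urllib.parse import quote

     def sign_update(consumer_key, consumer_secret, token, token_secret, status):
         url = 'http://twitter.com/statuses/update.xml'
         params = {
             'oauth_consumer_key': consumer_key,
             'oauth_nonce': uuid.uuid4().hex,
             'oauth_signature_method': 'HMAC-SHA1',
             'oauth_timestamp': str(int(time.time())),
             'oauth_token': token,
             'oauth_version': '1.0',
             'status': status,
         }
         # parameter string: percent-encoded keys and values, sorted, '&'-joined
         param_str = '&'.join('%s=%s' % (quote(k, safe=''), quote(v, safe=''))
                              for k, v in sorted(params.items()))
         # base string: METHOD & encoded URL & encoded parameter string
         base = '&'.join(['POST', quote(url, safe=''), quote(param_str, safe='')])
         # signing key: encoded consumer secret & encoded token secret
         key = quote(consumer_secret, safe='') + '&' + quote(token_secret, safe='')
         digest = hmac.new(key.encode('utf-8'), base.encode('utf-8'),
                           hashlib.sha1).digest()
         return base64.b64encode(digest).decode('ascii')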
     - what parameters are you passing in (if you are authenticating, what
     user are you authenticating as?)

 user = quenotacom or qnnnews

     - if you are using OAuth, and you're having problems with oauth, please
     provide us the entire oauth header that is being passed

 100% operative for GET access, not for POST (the same routine); attached

     - what was the response that the twitter api returned to you?

 Something is technically wrong Response 500

 My sites are www.quenota.us / www.quenota.com

 When I remove the status (the text to send) I receive a response saying
 everything is ok, but for status/update I need to send a parameter, so
 it looks like
 the oauth is ok; the same happens when I change something in the signature ...
 so it looks like something on your side.

 Thanks, if you need more let me know

 On Dec 22, 7:14 pm, Raffi Krikorian ra...@twitter.com wrote:



  in general, for situations like this, please realise that in order for us to
  help you, we need as much information as you can give us so that we can try
  to replicate the problem and hopefully track it down.  what is really
  helpful is the following:

     - what end point are you calling? (which method are you calling?  e.g.
     status/update)
     - what parameters are you passing in (if you are authenticating, what
     user are you authenticating as?)
     - if you are using OAuth, and you're having problems with oauth, please
     provide us the entire oauth header that is being passed
     - what was the response that the twitter api returned to you?

  if you suspect the problem is with your IP address or where you are calling
  from, then also please provide the IP address your call is coming from, and
  the time (as accurate as you can) that you made the call that failed.

  thanks!

  On Tue, Dec 22, 2009 at 3:06 PM, Mark McBride mmcbr...@twitter.com wrote:
   It means an error occurred processing your request.  Without more
   details (for example the specific headers and URLs) it's difficult to
   answer in more 

Re: [twitter-dev] Removing Registered Application

2010-01-04 Thread Lukas Müller
http://twitter.com/apps -> click on the app -> Edit -> Delete button

It's so simple. ;-)

Greetings from Germany
Lukas


[twitter-dev] how could I know whether a status has been retweeted by me

2010-01-04 Thread hzqtc
I'm working on a twitter client and I want to take advantage of the
new official retweet. I just want to get the same thing as the twitter
web's "retweeted by you". But the API doesn't contain any information
about whether a status has been retweeted by me. So how can I find
this out?
Thanks.


[twitter-dev] conversation chain

2010-01-04 Thread pallabi
Is it possible in any way that one twitter conversation will contain
only one tweet? I am trying to show the twitter conversation chain in
my application, but sometimes it happens that the conversation chain
contains only one tweet. Now my understanding is that if a
conversation contains only one tweet, how could it be a
conversation? Is this a bug in my application, or does twitter
sometimes return the conversation chain in this way?


Re: [twitter-dev] conversation chain

2010-01-04 Thread Cameron Kaiser
 Is it possible in any way that one twitter conversation will contain
 only one tweet? I am trying to show the twitter conversation chain in
 my application, but sometimes it happens that the conversation chain
 contains only one tweet. Now my understanding is that if a
 conversation contains only one tweet, how could it be a
 conversation? Is this a bug in my application, or does twitter
 sometimes return the conversation chain in this way?

If there is no reply-to information, you cannot further chain a tweet, even
if it is logically part of a conversation.
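
Where reply-to data does exist, chaining looks roughly like this (a sketch
using statuses/show per hop; auth and error handling omitted):

    import json
    from urllib.request import urlopen

    def conversation_chain(status_id):
        """Follow in_reply_to_status_id links back to the root tweet."""
        chain = []
        while status_id:
            url = 'http://api.twitter.com/1/statuses/show/%s.json' % status_id
            chain.append(json.load(urlopen(url)))
            status_id = chain[-1].get('in_reply_to_status_id')  # None at root
        return chain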

-- 
 personal: http://www.cameronkaiser.com/ --
  Cameron Kaiser * Floodgap Systems * www.floodgap.com * ckai...@floodgap.com
-- Test-tube babies shouldn't throw stones. ---


Re: [twitter-dev] how could I know whether a status has been retweeted by me

2010-01-04 Thread Cameron Kaiser
 I'm working on a twitter client and I want to take advantage of the
 new official retweet. I just want to get the same thing as the twitter
 web's "retweeted by you". But the API doesn't contain any information
 about whether a status has been retweeted by me. So how can I find
 this out?

http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-statuses-retweeted_by_me
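
A minimal sketch of using that method (Python; auth handling omitted, and
each returned status carries the original tweet under retweeted_status):

    import json
    from urllib.request import urlopen

    def ids_i_retweeted():
        """Set of original-status ids the authenticated user has retweeted,
        per statuses/retweeted_by_me."""
        url = 'http://api.twitter.com/1/statuses/retweeted_by_me.json?count=100'
        return {s['retweeted_status']['id'] for s in json.load(urlopen(url))}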

-- 
 personal: http://www.cameronkaiser.com/ --
  Cameron Kaiser * Floodgap Systems * www.floodgap.com * ckai...@floodgap.com
-- BOND THEME NOW PLAYING: Live and Let Die -


Re: [twitter-dev] mentions not working as expected with new retweet functionality

2010-01-04 Thread Michael Ivey
Native retweets are a new type of tweet, and do not show up as mentions. You
can cobble together an approximation using search, retweets_of_me, and
retweets that will get close, but I don't think you can be 100% sure to
catch all of them.
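
A sketch of that kind of approximation (merging several REST calls and
de-duplicating by status id; endpoint names as of early 2010, auth and
paging omitted):

    import json
    from urllib.request import urlopen

    def approximate_retweet_mentions(username):
        """Merge @username search results with retweets_of_me. Close to
        complete, but not guaranteed to catch every native retweet."""
        seen, merged = set(), []
        sources = [
            'http://search.twitter.com/search.json?q=%%40%s' % username,
            'http://api.twitter.com/1/statuses/retweets_of_me.json',
        ]
        for url in sources:
            data = json.load(urlopen(url))
            # search wraps tweets in a 'results' key; REST returns a list
            tweets = data['results'] if isinstance(data, dict) else data
            for t in tweets:
                if t['id'] not in seen:
                    seen.add(t['id'])
                    merged.append(t)
        return merged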

 -- ivey


On Sat, Jan 2, 2010 at 5:35 PM, gstarsf gpalre...@gmail.com wrote:

 Hey guys,
 We are getting ready to launch an application that relies
 predominantly on the mentions api.  The idea is that anytime someone
 mentions our username in the form of @username, we get that
 information and process it in our service.

 The new retweet functionality somehow hides the retweets from the
 mentions api.  This doesn't make sense, and I was wondering if others
 are having this problem?  Is there a way to make sure the retweets are
 coming through the mentions api??

 Thanks.



[twitter-dev] Re: mentions not working as expected with new retweet functionality

2010-01-04 Thread Dewald Pretorius
There is already an enhancement request for this:

http://code.google.com/p/twitter-api/issues/detail?id=1218

On Jan 4, 10:12 am, Michael Ivey michael.i...@gmail.com wrote:
 Native retweets are a new type of tweet, and do not show up as mentions. You
 can cobble together an approximation using search, retweets_of_me, and
 retweets that will get close, but I don't think you can be 100% sure to
 catch all of them.

  -- ivey

 On Sat, Jan 2, 2010 at 5:35 PM, gstarsf gpalre...@gmail.com wrote:
  Hey guys,
  We are getting ready to launch an application that relies
  predominantly on the mentions api.  The idea is that anytime someone
  mentions our username in the form of @username, we get that
  information and process it in our service.

  The new retweet functionality somehow hides the retweets from the
  mentions api.  This doesn't make sense, and I was wondering if others
  are having this problem?  Is there a way to make sure the retweets are
  coming through the mentions api??

  Thanks.


[twitter-dev] No results in from search for various (verified?) accounts

2010-01-04 Thread Ben
I am getting a 200 status message but no results for these two
accounts, in both atom and json, and they both definitely have tweets:
http://search.twitter.com/search.atom?q=from:lindsaylohan
http://search.twitter.com/search.atom?q=from:chucktodd

I don't want to speculate, but I wonder if there's something similar
between these two (besides their looks, obviously) that keeps them from
returning results... like perhaps being verified and/or having moved from a
different account name?

I looked around through the known issues and other threads, but didn't
see one that quite fits the same problem...


[twitter-dev] Re: issues with retweets and API

2010-01-04 Thread John
I've noticed that this is not always the case.

If I retweet an older tweet it shows up as a new tweet in
home_timeline. But if I retweet a tweet on the first page and then call
home_timeline, it doesn't contain the retweet (it's only visible under
retweeted_by_me). The same thing happens on twitter.com, except that
twitter.com knows I've retweeted (the difference between the API
and twitter).

I guess if it did show up, that could also be a solution instead of
needing to add new flags as I suggested above.

Basically what you were saying in your first post:

Quote:
For your second point
I am not seeing the retweeted status in my home_timeline for some
reason...


On Jan 4, 1:06 am, srikanth reddy srikanth.yara...@gmail.com wrote:
 home_timeline also includes both.
 For the user's own retweeted statuses I would just check
 if (status[i].retweeted_status != null and status[i].user.screen_name == currentuser).
 But the problem comes when you have a friend's redundant status, i.e.
 status[i].retweeted_status != null and status[i].user.screen_name !=
 currentuser. This friend's status appears in your home timeline even after
 you retweet it. If you try to retweet it again, it will throw an error as it
 has already been retweeted by you.
 To prevent this you have to manually disable that friend's retweets from
 appearing in your home_timeline (this option is available on the web only
 and has to be set for each and every user; currently this is also not working)

 Anyhow these issues are already reported here

 http://code.google.com/p/twitter-api/issues/detail?id=1214

 http://code.google.com/p/twitter-api/issues/detail?id=1274

 On Mon, Jan 4, 2010 at 2:24 AM, John munz...@gmail.com wrote:
  I understood you since the beginning. It doesn't feel redundant to me,
  I'm pretty sure that is intended functionality.

  Even if they disappeared from "retweeted by others" there still needs
  to be a way to know if you can undo regular tweets you've retweeted
  since they don't include any retweeted information in the other
  timeline methods (home_timeline etc).

  On Jan 3, 11:40 am, srikanth reddy srikanth.yara...@gmail.com wrote:
   I am not sure I expressed it clearly. Pardon my language.

   They will only disappear if your friends undo.

    It is true that they will disappear if your friends undo. But my point is
   that they should also disappear (not instantly) from 'Retweets by Others'
   when you retweet them from the 'Retweets by Others' tab (because they will
   be added to 'Retweets by Me'), and keeping them in 'Retweets by Others' is
   just redundant. If you refresh your 'Retweets by Others' tab you will see
   the tweet as retweeted by you and your friend, and you have the option of
   undoing it. But this undoing is possible only on the web. From an API point
   of view, if these statuses were removed from 'Retweets by Others' the
   moment the user retweets them, then undo would be simple (just delete the
   status id obtained from the statuses/retweet response). This type of
   undoing works only instantly, i.e. you cannot undo after you refresh the
   tab ('retweeted_to_me' no longer includes that status).

    This is true for other timeline methods as well. But if keeping this
   redundant data is intended then twitter has to make changes to the payload
   (i.e. add the retweeted_by_me flag) and provide destroy/retweet methods as
   suggested by you. Hope I am clear now.

   On Sun, Jan 3, 2010 at 10:14 AM, John munz...@gmail.com wrote:
They will always remain even if you undo. They will only disappear if
your friends undo.


[twitter-dev] Best way to pull/cache location based search results?

2010-01-04 Thread GeorgeMedia
Hello everyone!

I sure hope you can help. I am developing a web-based app that
searches for location-based tweets using the search API with json
results. I provide the longitude/latitude via my own local database on
my server. Presently I'm limited to just the US and Canada, but I'm
thinking I might be able to pull long/lat data dynamically from sites
like hostip.info or maybe even the yahoo api?

As you can imagine I am salivating in anticipation of the
trends.location api because parsing the text from the individual
tweets is a pain in the behind :)

But back on point... I'm using the search api exclusively because
that's the only way I know of to get localized tweets. I read the
streaming/firehose api docs, but they don't seem to support
geocode parameters. So my first question is: am I correct in assuming
that presently the only way to get location-based tweets is using the
geocode parameter on the search api?
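
For the record, a minimal sketch of that geocode search call (Python; the
coordinates and radius here are made-up values):

    import json
    from urllib.request import urlopen

    # geocode takes lat,long,radius; %2C is an encoded comma
    url = ('http://search.twitter.com/search.json'
           '?geocode=40.757929%2C-73.985506%2C25mi&rpp=50')
    results = json.load(urlopen(url))['results']
    for tweet in results:
        print(tweet['from_user'], tweet['text'])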

Secondly, I'm in a bit of a unique situation from what I can see from
other apps, in that it's a bit tough to run a cron job because there are
over 200,000 longitude/latitude sets in my database, and that's just
the US and Canada!

So how could I possibly cache the data so that when a user queries
some random city to see local tweets for that city, it's
available without doing an api call every time a user types in a city?
Because if my site gets popular that could become a problem very
quickly.
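
One simple pattern is a per-city cache with a time-to-live, so repeated
queries inside the window never hit the API (a sketch; `fetch` stands for
whatever function actually calls the search API):

    import time

    CACHE_TTL = 300   # seconds; tune to taste
    _cache = {}       # city -> (fetched_at, results)

    def local_tweets(city, fetch):
        """Return cached results for `city`, refreshing via fetch(city)
        only when the cached copy is older than CACHE_TTL."""
        now = time.time()
        hit = _cache.get(city)
        if hit and now - hit[0] < CACHE_TTL:
            return hit[1]
        results = fetch(city)
        _cache[city] = (now, results)
        return results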

If I can think it, I can usually code it, but being new to twitter's
API and its limits, I'm a bit stuck.


[twitter-dev] Re: bug with search using max_id

2010-01-04 Thread John
done

On Jan 3, 4:06 pm, Mark McBride mmcbr...@twitter.com wrote:
 John, can you open an issue on the code 
 tracker?http://code.google.com/p/twitter-api/issues
    ---Mark

 http://twitter.com/mccv

 On Sun, Jan 3, 2010 at 1:17 PM, John munz...@gmail.com wrote:
  another thing I've noticed is that search doesn't return as many
  records as when you do a search on twitter.com. You can verify using
  #tests. It returns about 5 records using the API while twitter.com
  returns about 20+. Could be related to the issue above.


Re: [twitter-dev] No results in from search for various (verified?) accounts

2010-01-04 Thread Mark McBride
This is an issue we're currently working to resolve.  Both of the
users you mentioned should be showing up in search results now.

   ---Mark

http://twitter.com/mccv



On Mon, Jan 4, 2010 at 10:37 AM, Ben magnetbo...@gmail.com wrote:
 I am getting a 200 status message but no results for these two
 accounts, in both atom and json, and they both definitely have tweets:
 http://search.twitter.com/search.atom?q=from:lindsaylohan
 http://search.twitter.com/search.atom?q=from:chucktodd

 I don't want to speculate, but I wonder if there's something similar
 between these two (besides their looks, obviously) that keeps them from
 returning results... like perhaps being verified and/or having moved from a
 different account name?

 I looked around through the known issues and other threads, but didn't
 see one that quite fit the same problem...



[twitter-dev] Re: API whitelisted but still not working

2010-01-04 Thread bnonews
Can anyone help me out here?

On 2 jan, 19:28, bnonews michaelvpop...@gmail.com wrote:
 Hi,

 In November I requested (from account @BreakingNews) that IP
 208.74.120.146 be whitelisted so we no longer have rate limits. That IP
 belongs to www.bnonews.com. Until now we stayed well below the rate limit,
 but now we seem to go above it. We are using a number of scripts which
 check Twitter RSS feeds for updates and immediately send them to an e-mail
 address. Every hour, between --.05 and --.30 of the hour (estimate), it
 will stop working. It started only after we added several more accounts
 for it to check. Did something go wrong, and is the IP still rate-limited?

 Thanks.

 This is the e-mail I received in November:

 Hi BNO News,
 Thanks for requesting to be on Twitter's API whitelist. We've approved
 your request!

 You should find any rate limits no longer apply to authenticated
 requests made by @BreakingNews.

 We've also whitelisted the following IP addresses:

 208.74.120.146

 This change should take effect within the next 48 hours.

 The Twitter API Team


[twitter-dev] Private list statuses are missing many tweets

2010-01-04 Thread Aaron Rankin
I have a private list, and both on twitter.com and via GET list/statuses,
many tweets are missing. I confirmed this behavior by comparing the most
recent tweets of several users to their public timelines.


Re: [twitter-dev] Re: Social Graph API: Legacy data format will be eliminated 1/11/2010

2010-01-04 Thread Marcel Molina
Dewald, it should be noted that, of course, not all 200 responses are
created equal: just because pulling down a response body with hundreds of
thousands of ids succeeds doesn't mean it isn't causing substantial strain
on our system. We want to make developing against the API as easy as is
feasible, but we need to do so in a spirit of reasonable compromise.

On Mon, Jan 4, 2010 at 5:59 PM, Dewald Pretorius dpr...@gmail.com wrote:

 Wilhelm,

 I want the API method to return the full social graph in as few API
 calls as possible.

 If your system can return up to X ids in one call without doing a 502
 freak-out, then continue to do so. For social graphs with X+n ids, we
 can use cursors.
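
 For reference, the cursor loop under discussion looks roughly like this
 (a Python sketch against the endpoint shape used elsewhere in this
 thread; no auth, rate-limit handling, or retries):

     import json
     from urllib.request import urlopen

     def all_follower_ids(screen_name):
         """Walk followers/ids with cursors until next_cursor returns 0."""
         ids, cursor = [], -1
         while cursor != 0:
             url = ('http://twitter.com/followers/ids/%s.json?cursor=%d'
                    % (screen_name, cursor))
             page = json.load(urlopen(url))
             ids.extend(page['ids'])
             cursor = page['next_cursor']
         return ids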

 On Jan 4, 6:07 pm, Wilhelm Bierbaum wilh...@twitter.com wrote:
  Can everyone contribute their use case for this API method? I'm trying
  to fully understand the deficiencies of the cursor approach.
 
  Please don't include that cursors are slow or that they are charged
  against the rate limit, as those are known issues.
 
  Thanks.




-- 
Marcel Molina
Twitter Platform Team
http://twitter.com/noradio


[twitter-dev] Re: Social Graph API: Legacy data format will be eliminated 1/11/2010

2010-01-04 Thread PJB

I think that's like asking someone: why do you eat food? But don't say
"because it tastes good" or "it nourishes you", because we already know
that! ;)

You guys presumably set the 5000 ids per cursor limit by analyzing
your user base and noting that one could still obtain the social graph
for the vast majority of users with a single call.

But this is a bit misleading.  For analytics-based apps, who aim to do
near real-time analysis of relationships, the focus is typically on
consumer brands who have a far larger than average number of
relationships (e.g., 50k - 200k).

This means that those apps are neck-deep in cursor-based stuff, and
quickly realize the existing drawbacks, including, in order of
significance:

- Latency.  Fetching ids for a user with 3000 friends is comparable
between the two calls.  But as you increment past 5000, the speed
quickly peaks at a 5+x difference (I will include more benchmarks in a
short while).  For example, fetching 80,000 friends via the get-all
method takes on average 3 seconds; it takes, on average, 15 seconds
with cursors.

- Code complexity & elegance.  I would say that there is a 3x increase
in code lines to account for cursors, from retrying failed cursors, to
caching to account for cursor slowness, to UI changes to coddle
impatient users.

- Incomprehensibility.  While there are obviously very good reasons
from Twitter's perspective (performance) to the cursor based model,
there really is no apparent obvious benefit to API users for the ids
calls.  I would make the case that a large majority of API uses of the
ids calls need and require the entire social graph, not an incomplete
one.  After all, we need to know what new relationships exist, but
also what old relationships have failed.  To dole out the data in
dribs and drabs is like serving a pint of beer in sippy cups.  That is
to say: most users need the entire social graph, so what is the use
case, from an API user's perspective, of NOT maintaining at least one
means to quickly, reliably, and efficiently get it in a single call?

- API Barriers to entry.  Most of the aforementioned arguments are
obviously from an API user's perspective, but there's something, too,
for Twitter to consider.  Namely, by increasing the complexity and
learning curve of particular API actions, you presumably further limit
the pool of developers who will engage with that API.  That's probably
a bad thing.

- Limits Twitter 2.0 app development.  This, again, speaks to issues
bearing on speed and complexity, but I think it is important.  The
first few apps in any given media or innovation invariably have to do
with basic functionality building blocks -- tweeting, following,
showing tweets.  But the next wave almost always has to do with
measurement and analysis.  By making such analysis more difficult, you
forestall the critically important ability for brands, and others, to
measure performance.

- API users have requested it.  Shouldn't, ultimately, the use case
for a particular API method simply be the fact that a number of API
developers have requested that it remain?


On Jan 4, 2:07 pm, Wilhelm Bierbaum wilh...@twitter.com wrote:
 Can everyone contribute their use case for this API method? I'm trying
 to fully understand the deficiencies of the cursor approach.

 Please don't include that cursors are slow or that they are charged
 against the rate limit, as those are known issues.

 Thanks.


Re: [twitter-dev] Re: Social Graph API: Legacy data format will be eliminated 1/11/2010

2010-01-04 Thread Jesse Stay
I'm just now noticing this (I agree - why was this being announced over the
holidays???) - this will make it near impossible to process large users.
 This is a *huge* change that just about kills any of the larger services
processing very large amounts of social graph data.  Please reconsider
allowing the all-in-one calls.  I don't want to have to explain to our users
with hundreds of thousands of followers why Twitter isn't letting us read
their Social Graph. (nor do I think Twitter wants us to)  I had a lot of
high hopes with Ryan Sarver's announcements last year of lifting limits, but
this is really discouraging.

Jesse

On Sun, Dec 27, 2009 at 7:29 PM, Dewald Pretorius dpr...@gmail.com wrote:

 What is being deprecated here is the old pagination method with the
 page parameter.

 As noted earlier, it is going to cause great pain if the API is going
 to assume a cursor of -1 if no cursor is specified, and hence enforce
 the use of cursors regardless of the size of the social graph.

 The API is currently comfortably returning social graphs smaller than
 200,000 members in one call. I very rarely get a 502 on social graphs
 of that size. It makes no sense to force us to make 40 API calls where 1 API
 call currently suffices and works. Those 40 API calls take between 40
 and 80 seconds to complete, as opposed to 1 to 2 seconds for the
 single API call. Multiply that by a few thousand Twitter accounts, and
 it adds hours of additional processing time, which is completely
 unnecessary, and will make getting through a large number of accounts
 virtually impossible.


 On Dec 27, 7:45 pm, Zac Bowling zbowl...@gmail.com wrote:
  I agree with the others to some extent. Although it's a good signal to
  stop using something ASAP when it is deprecated, saying "deprecated" and
  not giving a definite timeline for its removal isn't good either. (Source
  params are deprecated but still work and don't have a solid deprecation
  date, and I'm still using them because OAuth still sucks for desktop/mobile
  situations and would die with a 15-day heads-up on removal.)

  Also, iPhone app devs using this API would probably have a hard time
  squeezing a 15-day turnaround out of Apple right now.
 
  Zac Bowling
 
  On Sun, Dec 27, 2009 at 3:28 PM, Dewald Pretorius dpr...@gmail.com
 wrote:
   I agree 100%.
 
   Calls without the starting cursor of -1 must still return all
   followers as is currently the case.
 
   As a test I've set my system to use cursors on all calls. It inflates
   the processing time so much that things become completely unworkable.
 
   We can programmatically use cursors if showuser says that the person
   has more than a certain number of friends/followers. That's what I'm
   currently doing, and it works beautifully. So, please do not force us
   to use cursors on all calls.
 
   On Dec 24, 7:20 am, Aki yoru.fuku...@gmail.com wrote:
I agree with PJB. The previous announcements only said that the
pagination will be deprecated.
 
1.
 http://groups.google.com/group/twitter-api-announce/browse_thread/thr.
   ..
2.
 http://groups.google.com/group/twitter-api-announce/browse_thread/thr.
   ..
 
However, neither announcement said that the API call without the page
parameter (to get all IDs) would be removed or replaced with cursor
pagination. The deprecation of this method is not documented, as PJB said.
 
On Dec 24, 5:00 pm, PJB pjbmancun...@gmail.com wrote:
 
 Why hasn't this been announced before?  Why does the API suggest
 something totally different?  At the very least, can you please hold
 off on deprecation of this until 2/11/2010?  This is a new API
 change.
 
 On Dec 23, 7:45 pm, Raffi Krikorian ra...@twitter.com wrote:
 
  yes - if you do not pass in cursors, then the API will behave as though you
  requested the first cursor.

   Willhelm:

   Your announcement is apparently expanding the changeover from page to
   cursor in new, unannounced ways??

   The API documentation page says: "If the cursor parameter is not
   provided, all IDs are attempted to be returned, but large sets of IDs
   will likely fail with timeout errors."

   Yesterday you wrote: "Starting soon, if you fail to pass a cursor, the
   data returned will be that of the first cursor (-1) and the
   next_cursor and previous_cursor elements will be included."

   I can understand the need to swap from page to cursor, but was pleased
   that a single call was still available to return (or attempt to
   return) all friend/follower ids.  Now you are saying that, in addition
   to the changeover from page to cursor, you are also getting rid of
   this?

   Can you please confirm/deny?

   On Dec 22, 4:13 pm, Wilhelm Bierbaum wilh...@twitter.com wrote:
    We noticed that some clients are still calling social graph
    methods without cursor parameters. 

Re: [twitter-dev] Re: Social Graph API: Legacy data format will be eliminated 1/11/2010

2010-01-04 Thread Jesse Stay
Ditto PJB :-)

On Mon, Jan 4, 2010 at 8:12 PM, PJB pjbmancun...@gmail.com wrote:


 I think that's like asking someone: why do you eat food? But don't say
 "because it tastes good" or "it nourishes you", because we already know
 that! ;)

 You guys presumably set the 5000 ids per cursor limit by analyzing
 your user base and noting that one could still obtain the social graph
 for the vast majority of users with a single call.

 But this is a bit misleading.  For analytics-based apps, who aim to do
 near real-time analysis of relationships, the focus is typically on
 consumer brands who have a far larger than average number of
 relationships (e.g., 50k - 200k).

 This means that those apps are neck-deep in cursor-based stuff, and
 quickly realize the existing drawbacks, including, in order of
 significance:

 - Latency.  Fetching ids for a user with 3000 friends is comparable
 between the two calls.  But as you increment past 5000, the speed
 quickly peaks at a 5+x difference (I will include more benchmarks in a
 short while).  For example, fetching 80,000 friends via the get-all
 method takes on average 3 seconds; it takes, on average, 15 seconds
 with cursors.

 - Code complexity & elegance.  I would say that there is a 3x increase
 in code lines to account for cursors, from retrying failed cursors, to
 caching to account for cursor slowness, to UI changes to coddle
 impatient users.

 - Incomprehensibility.  While there are obviously very good reasons
 from Twitter's perspective (performance) to the cursor based model,
 there really is no apparent obvious benefit to API users for the ids
 calls.  I would make the case that a large majority of API uses of the
 ids calls need and require the entire social graph, not an incomplete
 one.  After all, we need to know what new relationships exist, but
 also what old relationships have failed.  To dole out the data in
 drips and drabs is like serving a pint of beer in sippy cups.  That is
 to say: most users need the entire social graph, so what is the use
 case, from an API user's perspective, of NOT maintaining at least one
 means to quickly, reliably, and efficiently get it in a single call?

 - API Barriers to entry.  Most of the aforementioned arguments are
 obviously from an API user's perspective, but there's something, too,
 for Twitter to consider.  Namely, by increasing the complexity and
 learning curve of particular API actions, you presumably further limit
 the pool of developers who will engage with that API.  That's probably
 a bad thing.

 - Limits Twitter 2.0 app development.  This, again, speaks to issues
 bearing on speed and complexity, but I think it is important.  The
 first few apps in any given media or innovation invariably have to do
 with basic functionality building blocks -- tweeting, following,
 showing tweets.  But the next wave almost always has to do with
 measurement and analysis.  By making such analysis more difficult, you
 forestall the critically important ability for brands, and others, to
 measure performance.

 - API users have requested it.  Shouldn't, ultimately, the use case
 for a particular API method simply be the fact that a number of API
 developers have requested that it remain?


 On Jan 4, 2:07 pm, Wilhelm Bierbaum wilh...@twitter.com wrote:
  Can everyone contribute their use case for this API method? I'm trying
  to fully understand the deficiencies of the cursor approach.
 
  Please don't include that cursors are slow or that they are charged
  against the rate limit, as those are known issues.
 
  Thanks.



[twitter-dev] Cannot create list with a specific slug, even if that slug doesn't exist in the account

2010-01-04 Thread LeeS - @semel
In my account, there's no list named 'design': 
http://twitter.com/shortyawards/design
results in a 404 page

When I try to create one with that name, I get numbers appended to it:

curl -u ..  -d name=design http://api.twitter.com/shortyawards/lists.xml
<?xml version="1.0" encoding="UTF-8"?>
<list>
  <id>5397152</id>
  <name>design</name>
  <full_name>@shortyawards/design-21</full_name>
  <slug>design-21</slug>
  <description></description>
  <subscriber_count>0</subscriber_count>

Each time we call the API, a new list with the same slug 'design-21'
is created.  This happens for four specific lists in our account, but
all the others are unaffected.

Any ideas how to solve this problem?

Lee


[twitter-dev] Re: Social Graph API: Legacy data format will be eliminated 1/11/2010

2010-01-04 Thread PJB

Some quick benchmarks...

Grabbed entire social graph for ~250 users, where each user has a
number of friends/followers between 0 and 80,000.  I randomly used
both the cursor and cursor-less API methods.

< 5000 ids
cursor: 0.72 avg seconds
cursorless: 0.51 avg seconds

5000 to 10,000 ids
cursor: 1.42 avg seconds
cursorless: 0.94 avg seconds

1 to 80,000 ids
cursor: 2.82 avg seconds
cursorless: 1.21 avg seconds

5,000 to 80,000 ids
cursor: 4.28
cursorless: 1.59

10,000 to 80,000 ids
cursor: 5.23
cursorless: 1.82

20,000 to 80,000 ids
cursor: 6.82
cursorless: 2

40,000 to 80,000 ids
cursor: 9.5
cursorless: 3

60,000 to 80,000 ids
cursor: 12.25
cursorless: 3.12

On Jan 4, 7:58 pm, Jesse Stay jesses...@gmail.com wrote:
 Ditto PJB :-)

 On Mon, Jan 4, 2010 at 8:12 PM, PJB pjbmancun...@gmail.com wrote:

  I think that's like asking someone: why do you eat food? But don't say
  "because it tastes good" or "it nourishes you", because we already know
  that! ;)

  You guys presumably set the 5000 ids per cursor limit by analyzing
  your user base and noting that one could still obtain the social graph
  for the vast majority of users with a single call.

  But this is a bit misleading.  For analytics-based apps, who aim to do
  near real-time analysis of relationships, the focus is typically on
  consumer brands who have a far larger than average number of
  relationships (e.g., 50k - 200k).

  This means that those apps are neck-deep in cursor-based stuff, and
  quickly realize the existing drawbacks, including, in order of
  significance:

  - Latency.  Fetching ids for a user with 3000 friends is comparable
  between the two calls.  But as you increment past 5000, the speed
  quickly peaks at a 5+x difference (I will include more benchmarks in a
  short while).  For example, fetching 80,000 friends via the get-all
  method takes on average 3 seconds; it takes, on average, 15 seconds
  with cursors.

  - Code complexity & elegance.  I would say that there is a 3x increase
  in code lines to account for cursors, from retrying failed cursors, to
  caching to account for cursor slowness, to UI changes to coddle
  impatient users.

  - Incomprehensibility.  While there are obviously very good reasons
  from Twitter's perspective (performance) to the cursor based model,
  there really is no apparent obvious benefit to API users for the ids
  calls.  I would make the case that a large majority of API uses of the
  ids calls need and require the entire social graph, not an incomplete
  one.  After all, we need to know what new relationships exist, but
  also what old relationships have failed.  To dole out the data in
  drips and drabs is like serving a pint of beer in sippy cups.  That is
  to say: most users need the entire social graph, so what is the use
  case, from an API user's perspective, of NOT maintaining at least one
  means to quickly, reliably, and efficiently get it in a single call?

  - API Barriers to entry.  Most of the aforementioned arguments are
  obviously from an API user's perspective, but there's something, too,
  for Twitter to consider.  Namely, by increasing the complexity and
  learning curve of particular API actions, you presumably further limit
  the pool of developers who will engage with that API.  That's probably
  a bad thing.

  - Limits Twitter 2.0 app development.  This, again, speaks to issues
  bearing on speed and complexity, but I think it is important.  The
  first few apps in any given media or innovation invariably have to do
  with basic functionality building blocks -- tweeting, following,
  showing tweets.  But the next wave almost always has to do with
  measurement and analysis.  By making such analysis more difficult, you
  forestall the critically important ability for brands, and others, to
  measure performance.

  - API users have requested it.  Shouldn't, ultimately, the use case
  for a particular API method simply be the fact that a number of API
  developers have requested that it remain?

  On Jan 4, 2:07 pm, Wilhelm Bierbaum wilh...@twitter.com wrote:
   Can everyone contribute their use case for this API method? I'm trying
   to fully understand the deficiencies of the cursor approach.

   Please don't include that cursors are slow or that they are charged
   against the rate limit, as those are known issues.

   Thanks.




Re: [twitter-dev] Re: Social Graph API: Legacy data format will be eliminated 1/11/2010

2010-01-04 Thread John Kalucki
The backend datastore returns following blocks in constant time,
regardless of the cursor depth. When I test a user with 100k+
followers via twitter.com using a ruby script, I see each cursored
block return in between 1.3 and 2.0 seconds, n=46, avg 1.59 seconds,
median 1.47 sec, stddev of .377, (home DSL, shared by several people
at the moment). So, it seems that we're returning the data over home
DSL at between 2,500 and 4,000 ids per second, which seems like a
perfectly reasonable rate and variance.

If I recall correctly, the cursorless methods are just shunted to
the first block each time, and thus represent a constant, incomplete,
amount of data...

Looking into my crystal ball, if you want a lot more than several
thousand widgets per second from Twitter, you probably aren't going to
get them via REST, and you will probably have some sort of business
relationship in place with Twitter.

-John Kalucki
http://twitter.com/jkalucki
Services, Twitter Inc.

(A slice of data below)

url /followers/ids/alexa_chung.xml?cursor=-1
fetch time = 1.478542
url /followers/ids/alexa_chung.xml?cursor=1322524362256299608
fetch time = 2.044831
url /followers/ids/alexa_chung.xml?cursor=1321126009663170021
fetch time = 1.350035
url /followers/ids/alexa_chung.xml?cursor=1319359640017038524
fetch time = 1.44636
url /followers/ids/alexa_chung.xml?cursor=1317653620096535558
fetch time = 1.955163
url /followers/ids/alexa_chung.xml?cursor=1316184964685221966
fetch time = 1.326226
url /followers/ids/alexa_chung.xml?cursor=1314866514116423204
fetch time = 1.96824
url /followers/ids/alexa_chung.xml?cursor=1313551933690106944
fetch time = 1.513922
url /followers/ids/alexa_chung.xml?cursor=1312201296962214944
fetch time = 1.59179
url /followers/ids/alexa_chung.xml?cursor=1311363260604388613
fetch time = 2.259924
url /followers/ids/alexa_chung.xml?cursor=1310627455188010229
fetch time = 1.706438
url /followers/ids/alexa_chung.xml?cursor=1309772694575801646
fetch time = 1.460413
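
(For anyone who wants to reproduce this, a rough Python equivalent of that
timing loop, not the actual ruby script, using the URL shape above; crude
XML scraping, no error handling:)

    import time
    from urllib.request import urlopen

    cursor = -1
    while cursor != 0:
        url = ('http://twitter.com/followers/ids/alexa_chung.xml?cursor=%d'
               % cursor)
        start = time.time()
        body = urlopen(url).read().decode('utf-8')
        print('url', url)
        print('fetch time = %f' % (time.time() - start))
        # pull <next_cursor> out of the XML payload
        cursor = int(body.split('<next_cursor>')[1].split('</next_cursor>')[0])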



On Mon, Jan 4, 2010 at 8:18 PM, PJB pjbmancun...@gmail.com wrote:

 Some quick benchmarks...

 Grabbed entire social graph for ~250 users, where each user has a
 number of friends/followers between 0 and 80,000.  I randomly used
 both the cursor and cursor-less API methods.

 < 5000 ids
 cursor: 0.72 avg seconds
 cursorless: 0.51 avg seconds

 5000 to 10,000 ids
 cursor: 1.42 avg seconds
 cursorless: 0.94 avg seconds

 1 to 80,000 ids
 cursor: 2.82 avg seconds
 cursorless: 1.21 avg seconds

 5,000 to 80,000 ids
 cursor: 4.28
 cursorless: 1.59

 10,000 to 80,000 ids
 cursor: 5.23
 cursorless: 1.82

 20,000 to 80,000 ids
 cursor: 6.82
 cursorless: 2

 40,000 to 80,000 ids
 cursor: 9.5
 cursorless: 3

 60,000 to 80,000 ids
 cursor: 12.25
 cursorless: 3.12

 On Jan 4, 7:58 pm, Jesse Stay jesses...@gmail.com wrote:
 Ditto PJB :-)

 On Mon, Jan 4, 2010 at 8:12 PM, PJB pjbmancun...@gmail.com wrote:

  I think that's like asking someone: why do you eat food? But don't say
  "because it tastes good" or "it nourishes you", because we already know
  that! ;)

  You guys presumably set the 5000 ids per cursor limit by analyzing
  your user base and noting that one could still obtain the social graph
  for the vast majority of users with a single call.

  But this is a bit misleading.  For analytics-based apps, who aim to do
  near real-time analysis of relationships, the focus is typically on
  consumer brands who have a far larger than average number of
  relationships (e.g., 50k - 200k).

  This means that those apps are neck-deep in cursor-based stuff, and
  quickly realize the existing drawbacks, including, in order of
  significance:

  - Latency.  Fetching ids for a user with 3000 friends is comparable
  between the two calls.  But as you increment past 5000, the speed
  quickly peaks at a 5+x difference (I will include more benchmarks in a
  short while).  For example, fetching 80,000 friends via the get-all
  method takes on average 3 seconds; it takes, on average, 15 seconds
  with cursors.

  - Code complexity & elegance.  I would say that there is a 3x increase
  caching to account for cursor slowness, to UI changes to coddle
  impatient users.

  - Incomprehensibility.  While there are obviously very good reasons
  from Twitter's perspective (performance) to the cursor based model,
  there really is no apparent obvious benefit to API users for the ids
  calls.  I would make the case that a large majority of API uses of the
  ids calls need and require the entire social graph, not an incomplete
  one.  After all, we need to know what new relationships exist, but
  also what old relationships have failed.  To dole out the data in
  drips and drabs is like serving a pint of beer in sippy cups.  That is
  to say: most users need the entire social graph, so what is the use
  case, from an API user's perspective, of NOT maintaining at least one
  means to quickly, reliably, and efficiently get it in 

Re: [twitter-dev] Re: Social Graph API: Legacy data format will be eliminated 1/11/2010

2010-01-04 Thread Jesse Stay
Also, how do we get a business relationship set up?  I've been asking for
that for years now.

Jesse

On Mon, Jan 4, 2010 at 10:16 PM, Jesse Stay jesses...@gmail.com wrote:

 John, how are things going on the real-time social graph APIs?  That would
 solve a lot of things for me surrounding this.

 Jesse


 On Mon, Jan 4, 2010 at 9:58 PM, John Kalucki j...@twitter.com wrote:

 The backend datastore returns following blocks in constant time,
 regardless of the cursor depth. When I test a user with 100k+
 followers via twitter.com using a ruby script, I see each cursored
 block return in between 1.3 and 2.0 seconds, n=46, avg 1.59 seconds,
 median 1.47 sec, stddev of .377, (home DSL, shared by several people
 at the moment). So, it seems that we're returning the data over home
 DSL at between 2,500 and 4,000 ids per second, which seems like a
 perfectly reasonable rate and variance.

 If I recall correctly, the cursorless methods are just shunted to
 the first block each time, and thus represent a constant, incomplete,
 amount of data...

 Looking into my crystal ball, if you want a lot more than several
 thousand widgets per second from Twitter, you probably aren't going to
 get them via REST, and you will probably have some sort of business
 relationship in place with Twitter.

 -John Kalucki
 http://twitter.com/jkalucki
 Services, Twitter Inc.

 (A slice of data below)

 url /followers/ids/alexa_chung.xml?cursor=-1
 fetch time = 1.478542
 url /followers/ids/alexa_chung.xml?cursor=1322524362256299608
 fetch time = 2.044831
 url /followers/ids/alexa_chung.xml?cursor=1321126009663170021
 fetch time = 1.350035
 url /followers/ids/alexa_chung.xml?cursor=1319359640017038524
 fetch time = 1.44636
 url /followers/ids/alexa_chung.xml?cursor=1317653620096535558
 fetch time = 1.955163
 url /followers/ids/alexa_chung.xml?cursor=1316184964685221966
 fetch time = 1.326226
 url /followers/ids/alexa_chung.xml?cursor=1314866514116423204
 fetch time = 1.96824
 url /followers/ids/alexa_chung.xml?cursor=1313551933690106944
 fetch time = 1.513922
 url /followers/ids/alexa_chung.xml?cursor=1312201296962214944
 fetch time = 1.59179
 url /followers/ids/alexa_chung.xml?cursor=1311363260604388613
 fetch time = 2.259924
 url /followers/ids/alexa_chung.xml?cursor=1310627455188010229
 fetch time = 1.706438
 url /followers/ids/alexa_chung.xml?cursor=1309772694575801646
 fetch time = 1.460413



 On Mon, Jan 4, 2010 at 8:18 PM, PJB pjbmancun...@gmail.com wrote:
 
  Some quick benchmarks...
 
  Grabbed entire social graph for ~250 users, where each user has a
  number of friends/followers between 0 and 80,000.  I randomly used
  both the cursor and cursor-less API methods.
 
  < 5000 ids
  cursor: 0.72 avg seconds
  cursorless: 0.51 avg seconds
 
  5000 to 10,000 ids
  cursor: 1.42 avg seconds
  cursorless: 0.94 avg seconds
 
  1 to 80,000 ids
  cursor: 2.82 avg seconds
  cursorless: 1.21 avg seconds
 
  5,000 to 80,000 ids
  cursor: 4.28
  cursorless: 1.59
 
  10,000 to 80,000 ids
  cursor: 5.23
  cursorless: 1.82
 
  20,000 to 80,000 ids
  cursor: 6.82
  cursorless: 2
 
  40,000 to 80,000 ids
  cursor: 9.5
  cursorless: 3
 
  60,000 to 80,000 ids
  cursor: 12.25
  cursorless: 3.12
 
  On Jan 4, 7:58 pm, Jesse Stay jesses...@gmail.com wrote:
  Ditto PJB :-)
 
  On Mon, Jan 4, 2010 at 8:12 PM, PJB pjbmancun...@gmail.com wrote:
 
   I think that's like asking someone: why do you eat food? But don't say
   "because it tastes good" or "it nourishes you", because we already know
   that! ;)
 
   You guys presumably set the 5000 ids per cursor limit by analyzing
   your user base and noting that one could still obtain the social
 graph
   for the vast majority of users with a single call.
 
   But this is a bit misleading.  For analytics-based apps, who aim to
 do
   near real-time analysis of relationships, the focus is typically on
   consumer brands who have a far larger than average number of
   relationships (e.g., 50k - 200k).
 
   This means that those apps are neck-deep in cursor-based stuff, and
   quickly realize the existing drawbacks, including, in order of
   significance:
 
   - Latency.  Fetching ids for a user with 3000 friends is comparable
   between the two calls.  But as you increment past 5000, the speed
   quickly peaks at a 5+x difference (I will include more benchmarks in
 a
   short while).  For example, fetching 80,000 friends via the get-all
   method takes on average 3 seconds; it takes, on average, 15 seconds
   with cursors.
 
   - Code complexity & elegance.  I would say that there is a 3x increase
   in code lines to account for cursors, from retrying failed cursors,
 to
   caching to account for cursor slowness, to UI changes to coddle
   impatient users.
 
   - Incomprehensibility.  While there are obviously very good reasons
   from Twitter's perspective (performance) to the cursor based model,
   there really is no apparent obvious benefit to API users for the ids
   calls.  I would make the case that a large 

[twitter-dev] Re: Social Graph API: Legacy data format will be eliminated 1/11/2010

2010-01-04 Thread PJB


On Jan 4, 8:58 pm, John Kalucki j...@twitter.com wrote:
 at the moment). So, it seems that we're returning the data over home
 DSL at between 2,500 and 4,000 ids per second, which seems like a
 perfectly reasonable rate and variance.

It's certainly not reasonable to expect it to take 10+ seconds to get
25,000 to 40,000 ids, PARTICULARLY when existing methods, for whatever
reason, return the same data in less than 2 seconds.  Twitter is being
incredibly short-sighted if they think this is indeed reasonable.

Some of us have built applications around your EXISTING APIs, and to
now suggest that we may need formal business relationships to
continue to use such APIs is seriously disquieting.

Disgusted...




Re: [twitter-dev] conversation chain

2010-01-04 Thread pallabi paul
Hi,
   Thanks for your reply. But actually I am talking about the
conversation thread returned by twitter. I am using this url to get the
conversation thread as xml:

http://search.twitter.com/search/thread/statusid

Is there any scenario in which twitter will return only one message in this
xml? If so, what is that scenario?

On Mon, Jan 4, 2010 at 7:20 PM, Cameron Kaiser spec...@floodgap.com wrote:

  Is it possible in any way that one twitter conversation will contain
  only one tweet? I am trying to show the twitter conversation chain in
  my application, but sometimes it happens that the conversation chain
  contains only one tweet. Now my understanding is that if a
  conversation contains only one tweet, how could it be a
  conversation? Is this a bug in my application, or does twitter
  sometimes return the conversation chain in this way?

 If there is no reply-to information, you cannot further chain a tweet, even
 if it is logically part of a conversation.

 --
  personal:
 http://www.cameronkaiser.com/ --
  Cameron Kaiser * Floodgap Systems * www.floodgap.com *
 ckai...@floodgap.com
 -- Test-tube babies shouldn't throw stones.
 ---



Re: [twitter-dev] Re: Social Graph API: Legacy data format will be eliminated 1/11/2010

2010-01-04 Thread John Kalucki
Ryan Sarver announced that we're going to provide an agreement
framework for Tweet data at Le Web last month. Until all that
licensing machinery is working well, we probably won't put any effort
into syndicating the social graph. At this point, social graph
syndication appears to be totally unformed, completely up in the air,
and any predictions would be unwise.

-John Kalucki
http://twitter.com/jkalucki
Services, Twitter Inc.


On Mon, Jan 4, 2010 at 9:16 PM, Jesse Stay jesses...@gmail.com wrote:
 John, how are things going on the real-time social graph APIs?  That would
 solve a lot of things for me surrounding this.
 Jesse

 On Mon, Jan 4, 2010 at 9:58 PM, John Kalucki j...@twitter.com wrote:

 The backend datastore returns following blocks in constant time,
 regardless of the cursor depth. When I test a user with 100k+
 followers via twitter.com using a ruby script, I see each cursored
 block return in between 1.3 and 2.0 seconds, n=46, avg 1.59 seconds,
 median 1.47 sec, stddev of .377, (home DSL, shared by several people
 at the moment). So, it seems that we're returning the data over home
 DSL at between 2,500 and 4,000 ids per second, which seems like a
 perfectly reasonable rate and variance.

 If I recall correctly, the cursorless methods are just shunted to
 the first block each time, and thus represent a constant, incomplete,
 amount of data...

 Looking into my crystal ball, if you want a lot more than several
 thousand widgets per second from Twitter, you probably aren't going to
 get them via REST, and you will probably have some sort of business
 relationship in place with Twitter.

 -John Kalucki
 http://twitter.com/jkalucki
 Services, Twitter Inc.

 (A slice of data below)

 url /followers/ids/alexa_chung.xml?cursor=-1
 fetch time = 1.478542
 url /followers/ids/alexa_chung.xml?cursor=1322524362256299608
 fetch time = 2.044831
 url /followers/ids/alexa_chung.xml?cursor=1321126009663170021
 fetch time = 1.350035
 url /followers/ids/alexa_chung.xml?cursor=1319359640017038524
 fetch time = 1.44636
 url /followers/ids/alexa_chung.xml?cursor=1317653620096535558
 fetch time = 1.955163
 url /followers/ids/alexa_chung.xml?cursor=1316184964685221966
 fetch time = 1.326226
 url /followers/ids/alexa_chung.xml?cursor=1314866514116423204
 fetch time = 1.96824
 url /followers/ids/alexa_chung.xml?cursor=1313551933690106944
 fetch time = 1.513922
 url /followers/ids/alexa_chung.xml?cursor=1312201296962214944
 fetch time = 1.59179
 url /followers/ids/alexa_chung.xml?cursor=1311363260604388613
 fetch time = 2.259924
 url /followers/ids/alexa_chung.xml?cursor=1310627455188010229
 fetch time = 1.706438
 url /followers/ids/alexa_chung.xml?cursor=1309772694575801646
 fetch time = 1.460413



 On Mon, Jan 4, 2010 at 8:18 PM, PJB pjbmancun...@gmail.com wrote:
 
  Some quick benchmarks...
 
  Grabbed entire social graph for ~250 users, where each user has a
  number of friends/followers between 0 and 80,000.  I randomly used
  both the cursor and cursor-less API methods.
 
  < 5000 ids
  cursor: 0.72 avg seconds
  cursorless: 0.51 avg seconds
 
  5000 to 10,000 ids
  cursor: 1.42 avg seconds
  cursorless: 0.94 avg seconds
 
  1 to 80,000 ids
  cursor: 2.82 avg seconds
  cursorless: 1.21 avg seconds
 
  5,000 to 80,000 ids
  cursor: 4.28
  cursorless: 1.59
 
  10,000 to 80,000 ids
  cursor: 5.23
  cursorless: 1.82
 
  20,000 to 80,000 ids
  cursor: 6.82
  cursorless: 2
 
  40,000 to 80,000 ids
  cursor: 9.5
  cursorless: 3
 
  60,000 to 80,000 ids
  cursor: 12.25
  cursorless: 3.12
 
  On Jan 4, 7:58 pm, Jesse Stay jesses...@gmail.com wrote:
  Ditto PJB :-)
 
  On Mon, Jan 4, 2010 at 8:12 PM, PJB pjbmancun...@gmail.com wrote:
 
   I think that's like asking someone: why do you eat food? But don't say
   "because it tastes good" or "it nourishes you", because we already know
   that! ;)
 
   You guys presumably set the 5000 ids per cursor limit by analyzing
   your user base and noting that one could still obtain the social
   graph
   for the vast majority of users with a single call.
 
   But this is a bit misleading.  For analytics-based apps, who aim to
   do
   near real-time analysis of relationships, the focus is typically on
   consumer brands who have a far larger than average number of
   relationships (e.g., 50k - 200k).
 
   This means that those apps are neck-deep in cursor-based stuff, and
   quickly realize the existing drawbacks, including, in order of
   significance:
 
   - Latency.  Fetching ids for a user with 3000 friends is comparable
    between the two calls.  But as you move past 5000, the gap quickly
    grows to a 5x+ difference (I will include more benchmarks in a
   short while).  For example, fetching 80,000 friends via the get-all
   method takes on average 3 seconds; it takes, on average, 15 seconds
   with cursors.
 
    - Code complexity & elegance.  I would say that there is a 3x
   increase
   in code lines to account for cursors, from retrying failed cursors,
   to
   caching 

[twitter-dev] rate limit: plz reply as soon as possible : urgent

2010-01-04 Thread viv
I am frequently hitting the rate limit. I know it happens when you do
more than 150 API calls in an hour.

int a[] = twitter.getFriendsIDs(xyz).getIDs();
int b[][] = new int[a.length][];
// fetch the friend list of each of xyz's friends
for (int i = 0; i < a.length; i++)
    b[i] = twitter.getFriendsIDs(a[i]).getIDs();

Now the problem: a.length = 148, i.e. xyz has 148 friends, and when we
go on to compute b[i] the call count exceeds 150 and we hit the rate
limit. Is there any way to deal with this without asking Twitter to
raise the limit to 20,000?
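
One workaround, sketched below on the assumption of Twitter4J 2.x,
where getRateLimitStatus() reports the remaining hits and the window's
reset time: check the budget before each call and sleep until the
hourly window resets once it is exhausted. The wrapper class is
illustrative, not part of Twitter4J.

import twitter4j.RateLimitStatus;
import twitter4j.Twitter;
import twitter4j.TwitterException;

public class ThrottledFetcher {
    private final Twitter twitter;

    public ThrottledFetcher(Twitter twitter) {
        this.twitter = twitter;
    }

    // Fetch friend IDs, waiting out the rate-limit window when no
    // calls remain instead of letting the request fail.
    public int[] friendIds(int userId)
            throws TwitterException, InterruptedException {
        RateLimitStatus status = twitter.getRateLimitStatus();
        if (status.getRemainingHits() <= 0) {
            long waitMs = status.getResetTime().getTime()
                    - System.currentTimeMillis();
            if (waitMs > 0) {
                Thread.sleep(waitMs + 1000);  // small safety margin
            }
        }
        return twitter.getFriendsIDs(userId).getIDs();
    }
}

Calling friendIds() inside the b[i] loop above means the loop simply
pauses for the rest of the hour once the budget runs out, rather than
throwing, so the full b[][] arrives without errors, just slowly.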


Re: [twitter-dev] Re: Social Graph API: Legacy data format will be eliminated 1/11/2010

2010-01-04 Thread John Kalucki
The existing APIs stopped providing accurate data about a year ago
and degraded substantially over a period of just a few months. Now the
only data store for social graph data requires cursors to access
complete sets. Pagination is just not possible with the same latency
at this scale without an order of magnitude or two increase in cost.
So, instead of hardware units in the tens and hundreds, think about
the same in the thousands and tens of thousands.

These APIs and their now decommissioned backing stores were developed
when having 20,000 followers was a lot. We're an order of magnitude or
two beyond that point along nearly every dimension. Accounts.
Followers per account. Tweets per second. Etc. As systems evolve, some
evolutionary paths become extinct.

Given boundless resources, the best we could do for a REST API, as
Marcel has alluded, is to do the cursoring for you and aggregate many
blocks into much larger responses. This wouldn't work very well for at
least two immediate reasons: 1) Running a system with multimodal
service times is a nightmare -- we'd have to provision a specific
endpoint for such a resource. 2) Ruby GC chokes on lots of objects.
We'd have to consider implementing this resource in another stack, or
do a lot of tuning. All this to build the opposite of what most
applications want: a real-time stream of graph deltas for a set of
accounts, or the list of recent set operations since the last poll --
and rarely, if ever, the entire following set.
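
As a minimal illustration of the delta pattern described above
(client-side only, since no such endpoint existed): given the
follower-ID sets from two successive polls, the adds and removes fall
out as set differences.

import java.util.HashSet;
import java.util.Set;

public class FollowerDeltas {
    // IDs in current but not in previous: new followers since the
    // last poll. Swap the arguments to get the unfollows instead.
    public static Set<Integer> added(Set<Integer> previous,
                                     Set<Integer> current) {
        Set<Integer> delta = new HashSet<Integer>(current);
        delta.removeAll(previous);
        return delta;
    }
}

Polling followers/ids on an interval and diffing locally yields the
event-style view most applications actually want, at the cost of one
full graph fetch per poll.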

Also, I'm a little rusty on the details on the social graph api, but
please detail which public resources allow retrieval of 40,000
followers in two seconds. I'd be very interested in looking at the
implementing code on our end. A curl timing would be nice (time curl
URL > /dev/null) too.

-John Kalucki
http://twitter.com/jkalucki
Services, Twitter Inc.


On Mon, Jan 4, 2010 at 9:18 PM, PJB pjbmancun...@gmail.com wrote:


 On Jan 4, 8:58 pm, John Kalucki j...@twitter.com wrote:
 at the moment). So, it seems that we're returning the data over home
 DSL at between 2,500 and 4,000 ids per second, which seems like a
 perfectly reasonable rate and variance.

 It's certainly not reasonable to expect it to take 10+ seconds to get
 25,000 to 40,000 ids, PARTICULARLY when existing methods, for whatever
 reason, return the same data in less than 2 seconds.  Twitter is being
 incredibly short-sighted if they think this is indeed reasonable.

 Some of us have built applications around your EXISTING APIs, and to
 now suggest that we may need formal business relationships to
 continue to use such APIs is seriously disquieting.

 Disgusted...





[twitter-dev] Re: Social Graph API: Legacy data format will be eliminated 1/11/2010

2010-01-04 Thread PJB

As noted in this thread, the fact that cursor-less methods for friends/
followers ids will be deprecated was newly announced on December 22.

In fact, the API documentation still clearly indicates that cursors
are optional, and that their absence will return a complete social
graph.  E.g.:

http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-followers%C2%A0ids

(If the cursor parameter is not provided, all IDs are attempted to be
returned)

The example at the bottom of that page gives a good example of
retrieving 300,000+ ids in several seconds:

http://twitter.com/followers/ids.xml?screen_name=dougw

Of course, retrieving 20-40k users is significantly faster.

Again, many of us have built apps around cursor-less API calls.  To
now deprecate them, with just a few days warning over the holidays, is
clearly inappropriate and uncalled for.  Similarly, to announce that
we must now expect 5x slowness when doing the same calls, when these
existing methods work well, is shocking.

Many developers live and die by the API documentation.  It's a really
fouled-up situation when the API documentation is so totally wrong,
right?

I urge those folks addressing this issue to preserve the cursor-less
methods.  Barring that, I urge them to return at least 25,000 ids per
cursor (as you note, time progression has made 5000 per call
antiquated and ineffective for today's Twitter user) and grant at
least 3 months before deprecation.

On Jan 4, 10:23 pm, John Kalucki j...@twitter.com wrote:
 The existing APIs stopped providing accurate data about a year ago
 and degraded substantially over a period of just a few months. Now the
 only data store for social graph data requires cursors to access
 complete sets. Pagination is just not possible with the same latency
 at this scale without an order of magnitude or two increase in cost.
 So, instead of hardware units in the tens and hundreds, think about
 the same in the thousands and tens of thousands.

 These APIs and their now decommissioned backing stores were developed
 when having 20,000 followers was a lot. We're an order of magnitude or
 two beyond that point along nearly every dimension. Accounts.
 Followers per account. Tweets per second. Etc. As systems evolve, some
 evolutionary paths become extinct.

 Given boundless resources, the best we could do for a REST API, as
 Marcel has alluded, is to do the cursoring for you and aggregate many
 blocks into much larger responses. This wouldn't work very well for at
 least two immediate reasons: 1) Running a system with multimodal
 service times is a nightmare -- we'd have to provision a specific
 endpoint for such a resource. 2) Ruby GC chokes on lots of objects.
 We'd have to consider implementing this resource in another stack, or
 do a lot of tuning. All this to build the opposite of what most
 applications want: a real-time stream of graph deltas for a set of
 accounts, or the list of recent set operations since the last poll --
 and rarely, if ever, the entire following set.

 Also, I'm a little rusty on the details on the social graph api, but
 please detail which public resources allow retrieval of 40,000
 followers in two seconds. I'd be very interested in looking at the
 implementing code on our end. A curl timing would be nice (time curl
 URL > /dev/null) too.

 -John Kalucki
 http://twitter.com/jkalucki
 Services, Twitter Inc.

 On Mon, Jan 4, 2010 at 9:18 PM, PJB pjbmancun...@gmail.com wrote:

  On Jan 4, 8:58 pm, John Kalucki j...@twitter.com wrote:
  at the moment). So, it seems that we're returning the data over home
  DSL at between 2,500 and 4,000 ids per second, which seems like a
  perfectly reasonable rate and variance.

  It's certainly not reasonable to expect it to take 10+ seconds to get
  25,000 to 40,000 ids, PARTICULARLY when existing methods, for whatever
  reason, return the same data in less than 2 seconds.  Twitter is being
  incredibly short-sighted if they think this is indeed reasonable.

  Some of us have built applications around your EXISTING APIs, and to
  now suggest that we may need formal business relationships to
  continue to use such APIs is seriously disquieting.

  Disgusted...




[twitter-dev] Re: Social Graph API: Legacy data format will be eliminated 1/11/2010

2010-01-04 Thread PJB
~300,000 ids in 15 seconds:

$ time curl http://twitter.com/followers/ids.xml?screen_name=dougw > /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 5545k  100 5545k    0     0   346k      0  0:00:15  0:00:15 --:--:--  474k

real    0m15.994s
user    0m0.021s
sys     0m0.061s

===

~100,000 ids in 6 seconds:

$ time curl http://twitter.com/followers/ids.xml?screen_name=karlrove > /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1700k  100 1700k    0     0   286k      0  0:00:05  0:00:05 --:--:--  433k

real    0m5.932s
user    0m0.010s
sys     0m0.025s

===

12,000 ids in 1.2 seconds:

$ time curl http://twitter.com/followers/ids.xml?screen_name=markos > /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  213k  100  213k    0     0   168k      0  0:00:01  0:00:01 --:--:--  257k

real    0m1.269s
user    0m0.004s
sys     0m0.003s

===

These calls are night-and-day better than cursor-based calls.  Again,
I plead with the folks pushing the bits to preserve what is officially
documented.



On Jan 4, 10:23 pm, John Kalucki j...@twitter.com wrote:
 The existing APIs stopped providing accurate data about a year ago
 and degraded substantially over a period of just a few months. Now the
 only data store for social graph data requires cursors to access
 complete sets. Pagination is just not possible with the same latency
 at this scale without an order of magnitude or two increase in cost.
 So, instead of hardware units in the tens and hundreds, think about
 the same in the thousands and tens of thousands.

 These APIs and their now decommissioned backing stores were developed
 when having 20,000 followers was a lot. We're an order of magnitude or
 two beyond that point along nearly every dimension. Accounts.
 Followers per account. Tweets per second. Etc. As systems evolve, some
 evolutionary paths become extinct.

 Given boundless resources, the best we could do for a REST API, as
 Marcel has alluded, is to do the cursoring for you and aggregate many
 blocks into much larger responses. This wouldn't work very well for at
 least two immediate reasons: 1) Running a system with multimodal
 service times is a nightmare -- we'd have to provision a specific
 endpoint for such a resource. 2) Ruby GC chokes on lots of objects.
 We'd have to consider implementing this resource in another stack, or
 do a lot of tuning. All this to build the opposite of what most
 applications want: a real-time stream of graph deltas for a set of
 accounts, or the list of recent set operations since the last poll --
 and rarely, if ever, the entire following set.

 Also, I'm a little rusty on the details on the social graph api, but
 please detail which public resources allow retrieval of 40,000
 followers in two seconds. I'd be very interested in looking at the
 implementing code on our end. A curl timing would be nice (time curl
 URL > /dev/null) too.

 -John Kalucki
 http://twitter.com/jkalucki
 Services, Twitter Inc.

 On Mon, Jan 4, 2010 at 9:18 PM, PJB pjbmancun...@gmail.com wrote:

  On Jan 4, 8:58 pm, John Kalucki j...@twitter.com wrote:
  at the moment). So, it seems that we're returning the data over home
  DSL at between 2,500 and 4,000 ids per second, which seems like a
  perfectly reasonable rate and variance.

  It's certainly not reasonable to expect it to take 10+ seconds to get
  25,000 to 40,000 ids, PARTICULARLY when existing methods, for whatever
  reason, return the same data in less than 2 seconds.  Twitter is being
  incredibly short-sighted if they think this is indeed reasonable.

  Some of us have built applications around your EXISTING APIs, and to
  now suggest that we may need formal business relationships to
  continue to use such APIs is seriously disquieting.

  Disgusted...




Re: [twitter-dev] Re: Social Graph API: Legacy data format will be eliminated 1/11/2010

2010-01-04 Thread Jesse Stay
Again, ditto PJB - just making sure the Twitter devs don't think PJB is
alone in this.  I'm sure Dewald and many other developers, including those
unaware of this (is it even on the status blog?), agree.  I'm also seeing
results similar to PJB's in my benchmarks: cursor-less is much, much faster.
If you must change something, put a cap on the cursor-less calls (200,000
should be sufficient).  Please don't take them away.

Jesse

On Mon, Jan 4, 2010 at 11:40 PM, PJB pjbmancun...@gmail.com wrote:


 As noted in this thread, the fact that cursor-less methods for friends/
 followers ids will be deprecated was newly announced on December 22.

 In fact, the API documentation still clearly indicates that cursors
 are optional, and that their absence will return a complete social
 graph.  E.g.:

 http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-followers%C2%A0ids

 (If the cursor parameter is not provided, all IDs are attempted to be
 returned)

 The example at the bottom of that page gives a good example of
 retrieving 300,000+ ids in several seconds:

 http://twitter.com/followers/ids.xml?screen_name=dougw

 Of course, retrieving 20-40k users is significantly faster.

 Again, many of us have built apps around cursor-less API calls.  To
 now deprecate them, with just a few days warning over the holidays, is
 clearly inappropriate and uncalled for.  Similarly, to announce that
 we must now expect 5x slowness when doing the same calls, when these
 existing methods work well, is shocking.

 Many developers live and die by the API documentation.  It's a really
 fouled-up situation when the API documentation is so totally wrong,
 right?

 I urge those folks addressing this issue to preserve the cursor-less
 methods.  Barring that, I urge them to return at least 25,000 ids per
 cursor (as you note, time progression has made 5000 per call
 antiquated and ineffective for today's Twitter user) and grant at
 least 3 months before deprecation.

 On Jan 4, 10:23 pm, John Kalucki j...@twitter.com wrote:
  The existing APIs stopped providing accurate data about a year ago
  and degraded substantially over a period of just a few months. Now the
  only data store for social graph data requires cursors to access
  complete sets. Pagination is just not possible with the same latency
  at this scale without an order of magnitude or two increase in cost.
  So, instead of hardware units in the tens and hundreds, think about
  the same in the thousands and tens of thousands.
 
  These APIs and their now decommissioned backing stores were developed
  when having 20,000 followers was a lot. We're an order of magnitude or
  two beyond that point along nearly every dimension. Accounts.
  Followers per account. Tweets per second. Etc. As systems evolve, some
  evolutionary paths become extinct.
 
  Given boundless resources, the best we could do for a REST API, as
  Marcel has alluded, is to do the cursoring for you and aggregate many
  blocks into much larger responses. This wouldn't work very well for at
  least two immediate reasons: 1) Running a system with multimodal
  service times is a nightmare -- we'd have to provision a specific
  endpoint for such a resource. 2) Ruby GC chokes on lots of objects.
  We'd have to consider implementing this resource in another stack, or
  do a lot of tuning. All this to build the opposite of what most
  applications want: a real-time stream of graph deltas for a set of
  accounts, or the list of recent set operations since the last poll --
  and rarely, if ever, the entire following set.
 
  Also, I'm a little rusty on the details on the social graph api, but
  please detail which public resources allow retrieval of 40,000
  followers in two seconds. I'd be very interested in looking at the
  implementing code on our end. A curl timing would be nice (time curl
  URL > /dev/null) too.
 
  -John Kalucki
  http://twitter.com/jkalucki
  Services, Twitter Inc.
 
  On Mon, Jan 4, 2010 at 9:18 PM, PJB pjbmancun...@gmail.com wrote:
 
   On Jan 4, 8:58 pm, John Kalucki j...@twitter.com wrote:
   at the moment). So, it seems that we're returning the data over home
   DSL at between 2,500 and 4,000 ids per second, which seems like a
   perfectly reasonable rate and variance.
 
   It's certainly not reasonable to expect it to take 10+ seconds to get
   25,000 to 40,000 ids, PARTICULARLY when existing methods, for whatever
   reason, return the same data in less than 2 seconds.  Twitter is being
   incredibly short-sighted if they think this is indeed reasonable.
 
   Some of us have built applications around your EXISTING APIs, and to
   now suggest that we may need formal business relationships to
   continue to use such APIs is seriously disquieting.
 
   Disgusted...