If you are pulling down the entire social graph, why not use the
social graph calls (friends/ids, followers/ids), which would deliver
all 7000 ids in 2 calls?
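As a rough sketch, the cursor walk over an ids endpoint looks like this. The `fetch_page` callable is a stand-in for the actual HTTP request to followers/ids (names here are illustrative, not from the API itself); each response carries up to ~5000 ids plus a `next_cursor`, and a cursor of 0 means you're done:

```python
def collect_ids(fetch_page):
    """Walk a cursored ids endpoint: each call returns a dict like
    {"ids": [...], "next_cursor": N}; next_cursor == 0 ends the walk."""
    ids, cursor = [], -1  # -1 is the conventional starting cursor
    while cursor != 0:
        page = fetch_page(cursor)
        ids.extend(page["ids"])
        cursor = page["next_cursor"]
    return ids

# Stand-in for the HTTP call: two pages of 5000 and 2000 ids,
# mirroring a ~7000-follower account fetched in 2 requests.
def fake_fetch(cursor):
    pages = {-1: {"ids": list(range(5000)), "next_cursor": 99},
             99: {"ids": list(range(5000, 7000)), "next_cursor": 0}}
    return pages[cursor]
```

With a real HTTP fetcher plugged in, `collect_ids` returns the full id list in as many requests as there are pages.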

You can also parallelize this process by looping through different
users on each thread instead of using each thread to grab a different
page/cursor of the same user.
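A minimal sketch of that layout with a thread pool (the `fetch_user_ids` callable is hypothetical, standing in for a full cursor walk of one user): pages of a single user must be fetched in sequence because each response supplies the next cursor, but independent users have no such dependency, so they parallelize cleanly.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_all(users, fetch_user_ids, workers=4):
    """Fetch each user's complete id list on its own worker thread.
    fetch_user_ids(user) should walk all of that user's cursor pages;
    results come back keyed by user, in input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(users, pool.map(fetch_user_ids, users)))
```

Since the work is I/O-bound (waiting on HTTP responses), threads are enough here; no multiprocessing needed.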

Regarding the issue you filed: if you have the users cached locally,
you could use the social graph methods to determine the missing/new 2k
users pretty quickly by comparing ids.
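The comparison itself is just set arithmetic, something like this sketch (function and variable names are illustrative): pull the current id list via the social graph call, diff it against the cached list, and you only need to hydrate the handful of ids that actually changed.

```python
def diff_followers(cached_ids, current_ids):
    """Compare a locally cached id list against a fresh social-graph
    pull.  Returns (new_ids, gone_ids): ids gained and lost since the
    last sync."""
    cached, current = set(cached_ids), set(current_ids)
    new_ids = sorted(current - cached)   # followers gained
    gone_ids = sorted(cached - current)  # followers lost
    return new_ids, gone_ids
```

Only the ids in `new_ids` then need a users-lookup call, instead of re-fetching all 17k+ profiles.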

-Chad

On Wed, Oct 14, 2009 at 9:50 PM, Tim Haines <tmhai...@gmail.com> wrote:
>
> Hi Chad,
>
> Statuses/followers.
>
> I've just timed another attempt - it took 25 minutes to retrieve 17957
> followers with statuses/followers.
>
> Is there anything I can elaborate on in the filed issue to make it
> clearer?
>
> Tim.
>
> On Oct 15, 2:42 pm, Chad Etzel <c...@twitter.com> wrote:
>> Hi Tim,
>>
>> You said "Retrieving 7000 followers just took > 20 minutes for me."
>> Can you explain what you meant by that?
>>
>> Are you using the friends/ids, followers/ids methods or the
>> statuses/friends, statuses/followers methods?
>>
>> -Chad
>>
>>
>>
>> On Wed, Oct 14, 2009 at 8:12 PM, Tim Haines <tmhai...@gmail.com> wrote:
>>
>> > Hi'ya,
>>
>> > I'm migrating my code to use cursors at the moment.  It's frustrating
>> > that cursored calls have to be made sequentially, whereas paged calls
>> > could be made asynchronously.  Retrieving 7000 followers just took > 20
>> > minutes for me.
>>
>> > I filed an issue that proposes a solution here:
>> > http://code.google.com/p/twitter-api/issues/detail?id=1078  If you
>> > retrieve friends or followers, please take a look and give it a star
>> > if it's important to you.
>>
>> > If anyone can suggest a work around for this, I'd be happy to hear it.
>>
>> > Cheers,
>>
>> > Tim.
>