On Sat, Sep 5, 2009 at 2:54 AM, Michael Steuer <mste...@gmail.com> wrote:
> Do you have any code examples for this?

http://twitreport.tntluoma.com/id-to-name.html.sh

It's a work in progress.

> So you basically maintain a local db of id-to-screen_names?

Yes. Each file in a given directory is named with the ID. Each file
has two lines: the first line is the user's "full name" and the second
is their "screen name".

If I need to look up an ID, I just check whether a file exists
with a matching filename. If it doesn't, I call Twitter.
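
In shell terms, the lookup amounts to something like this sketch (not
the actual script: the cache directory name, the users/show.xml call,
and the sed-based parsing are just my illustrative shorthand):

    # Sketch only; the real script differs. CACHE_DIR and the sed-based
    # XML parsing are illustrative, as is the users/show call.
    CACHE_DIR="$HOME/.twitreport/ids"
    mkdir -p "$CACHE_DIR"

    lookup_id () {
        ID="$1"
        FILE="$CACHE_DIR/$ID"
        if [ -f "$FILE" ]; then
            # cache hit: line 1 = full name, line 2 = screen name
            cat "$FILE"
            return 0
        fi
        # cache miss: ask Twitter, then save both lines under the ID
        XML=$(curl -s "http://twitter.com/users/show.xml?user_id=$ID") || return 1
        NAME=$(echo "$XML" | sed -n 's|.*<name>\(.*\)</name>.*|\1|p' | head -1)
        SCREEN=$(echo "$XML" | sed -n 's|.*<screen_name>\(.*\)</screen_name>.*|\1|p' | head -1)
        [ -n "$SCREEN" ] || return 1
        printf '%s\n%s\n' "$NAME" "$SCREEN" | tee "$FILE"
    }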

Because of the way Twitter works, most of the people who started
using the service are people who follow me (which is how they found out
about it), so there's a lot of overlap in "friendship circles" or
whatever you want to call them. I haven't advertised the service much,
which means it hasn't grown beyond its means.


> Do you get all of a user's friend/follower id's and then look up their screen 
> names with
> individual API calls?

Yes, if not cached.
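
Roughly, that step looks like this (again a sketch: the followers/ids.xml
call and the grep parsing are simplified, and lookup_id is the cache
function from the sketch above):

    # Sketch: resolve every follower ID for a user through the cache,
    # so Twitter is only hit on cache misses.
    USER="$1"
    curl -s "http://twitter.com/followers/ids.xml?screen_name=$USER" |
        grep -o '<id>[0-9]*</id>' | grep -o '[0-9][0-9]*' |
        while read FOLLOWER_ID; do
            lookup_id "$FOLLOWER_ID"
        done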


> What do you do when a user has more relationships than the API
> limit will allow you to query?

I have placed an artificial limit on the number of relationships the
script will check. I think it's 2,000. So if @sween decides to follow
@oprah and @oprah uses TwitReports, she's not going to get as good a
report as others.
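
In the sketch above, that cap is just a head in the pipeline. Something
like this (follower_ids.txt, one ID per line, stands in for the pipeline
output, and the 2,000 is the figure from memory):

    # Sketch: cap how many IDs get resolved. follower_ids.txt (one ID
    # per line) stands in for the pipeline output; 2,000 is from memory.
    MAX_IDS=2000
    head -n "$MAX_IDS" follower_ids.txt | while read FOLLOWER_ID; do
        lookup_id "$FOLLOWER_ID"
    done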

In the majority of cases, though, I see people with fewer than 2,000
followers following people who also have fewer than 2,000 followers.


> And what about users that change their
> screen name? In short, can you provide a bit more background to your
> methods?

Changing screen names is the biggest drawback, of course. The new
version of the script has built-in expiration, which can be set to any
number of days. I've set mine to 7.
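
Expressed as a sketch (the real script's expiry may work differently;
the directory is the same illustrative one as above):

    # Sketch: throw away cache entries older than EXPIRE_DAYS so the
    # next lookup re-fetches them. 7 is just the value I use.
    EXPIRE_DAYS=7
    CACHE_DIR="$HOME/.twitreport/ids"
    find "$CACHE_DIR" -type f -mtime +"$EXPIRE_DAYS" -delete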

I wish that followers/friends information came with names as well as
IDs, but everything I've heard from Twitter on this list tells me not
to hold my breath…

TjL

PS: perhaps a better plan than just expiring the data would be to
validate it, which is to say: every 7/14/30 days, check whether the
information is still accurate. This could be done during off-peak
hours (say 3-6am US Eastern time) when my need for API hits is lower.
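
A sketch of what I have in mind (nothing that exists yet; the cron
line, the interval, and the parsing are all illustrative):

    # Sketch: instead of deleting stale entries, re-check them during
    # off-peak hours and only overwrite on a successful answer.
    # Illustrative cron line:  0 3 * * * /path/to/revalidate.sh
    CACHE_DIR="$HOME/.twitreport/ids"
    REVALIDATE_DAYS=7
    find "$CACHE_DIR" -type f -mtime +"$REVALIDATE_DAYS" | while read FILE; do
        ID=$(basename "$FILE")
        XML=$(curl -s "http://twitter.com/users/show.xml?user_id=$ID") || continue
        NAME=$(echo "$XML" | sed -n 's|.*<name>\(.*\)</name>.*|\1|p' | head -1)
        SCREEN=$(echo "$XML" | sed -n 's|.*<screen_name>\(.*\)</screen_name>.*|\1|p' | head -1)
        # keep the old entry if Twitter didn't give a usable answer
        [ -n "$SCREEN" ] && printf '%s\n%s\n' "$NAME" "$SCREEN" > "$FILE"
    done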
