[twitter-dev] verify_credentials longevity?

2010-08-30 Thread Jud
With the move to OAuth, are we going to see verify_credentials
deprecated?

http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-account%C2%A0verify_credentials

-- 
Twitter developer documentation and resources: http://dev.twitter.com/doc
API updates via Twitter: http://twitter.com/twitterapi
Issues/Enhancements Tracker: http://code.google.com/p/twitter-api/issues/list
Change your membership to this group: 
http://groups.google.com/group/twitter-development-talk?hl=en


[twitter-dev] Re: Get resolved URLs?

2010-08-09 Thread Jud
Gnip is beta-testing URL unwinding in all of its streams. All short
URLs that move through Gnip get unwound (one level), in real time,
when we transform to Activity Streams. We're representing the
unwinding as follows (as an example). If you're interested in trying
this out, you can sign up for a trial at http://try.gnip.com . We have
to manually toggle the feature on for you (as it's in beta), so be
sure to email us (i...@gnip.com) with that request after signing up for
the trial.

<gnip:urls>
  <gnip:url>
    <gnip:short_url>http://bit.ly/aC7YVr</gnip:short_url>
    <gnip:long_url>http://www.twitlonger.com/show/1e65c74b49302a0afd8580561e63456a</gnip:long_url>
  </gnip:url>
</gnip:urls>
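If you're consuming this format, a minimal parsing sketch might look like the following (Python; the namespace URI here is a placeholder assumption — the real Gnip payload declares its own):

```python
# Hedged sketch: extract (short_url, long_url) pairs from a gnip:urls
# element like the example above. GNIP_NS is a hypothetical namespace
# URI; substitute whatever the actual Activity Streams payload declares.
import xml.etree.ElementTree as ET

GNIP_NS = "http://www.gnip.com/schemas/2010"  # assumption, not the real URI

sample = """<gnip:urls xmlns:gnip="{ns}">
  <gnip:url>
    <gnip:short_url>http://bit.ly/aC7YVr</gnip:short_url>
    <gnip:long_url>http://www.twitlonger.com/show/1e65c74b49302a0afd8580561e63456a</gnip:long_url>
  </gnip:url>
</gnip:urls>""".format(ns=GNIP_NS)

def extract_url_pairs(xml_text):
    """Return a list of (short_url, long_url) tuples."""
    root = ET.fromstring(xml_text)
    pairs = []
    for url in root.findall("{%s}url" % GNIP_NS):
        short = url.findtext("{%s}short_url" % GNIP_NS)
        long_ = url.findtext("{%s}long_url" % GNIP_NS)
        pairs.append((short, long_))
    return pairs
```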

On Aug 6, 10:35 am, Brian Medendorp brian.medend...@gmail.com wrote:
 I can see that twitter itself must be resolving any shortened URLs
 somewhere, because if you search for a domain name (such as
 amazon.com), you get a bunch of results that don't seem to match until
 you resolve the shortened URL in the tweet and see that it points to
 the domain you searched for, which is fantastic!

 However, I am wondering if there is any way to get those resolved URLs
 from the API, or (better yet) if there is any way that those URLs could
 be exposed in the search results themselves. Currently, I am resolving
 the URLs myself by requesting the URL and saving the resulting
 location, but that starts to take a while when there are a lot of
 results returned.
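For reference, the manual approach described above (request the URL, record where it lands) can be sketched like this in Python; actually running it against a real shortener requires network access:

```python
# A minimal sketch of client-side URL unwinding: let urllib follow the
# shortener's 3xx redirects and report the final location. One level of
# unwinding per redirect chain, as with Gnip's beta feature.
from urllib.request import Request, urlopen

def resolve_once(short_url, timeout=5):
    """Follow redirects and return the final URL. urllib follows 3xx
    responses automatically; geturl() reports where we landed."""
    req = Request(short_url, method="HEAD")  # HEAD avoids downloading the body
    with urlopen(req, timeout=timeout) as resp:
        return resp.geturl()
```

Since the complaint is that sequential resolution "takes a while", running `resolve_once` over a batch of results with a thread pool (e.g. `concurrent.futures.ThreadPoolExecutor`) is the obvious speedup, as the work is almost entirely network wait.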


[twitter-dev] User Stream's API usage

2010-04-14 Thread Jud
I'm in the Chirp conference IP address range, but usage of
http://chirpstream.twitter.com/2b/user.json isn't clear.

- the follow predicate in a POST doesn't work (should it?)
- track as a predicate gets accepted, but no data comes through (I get
a single '{friends:[]}', but that's it)
- am I supposed to be tracking userids or names or keywords?

is the resource simply not turned on until later, on the hackathon's
network?


-- 
To unsubscribe, reply using remove me as the subject.


[twitter-dev] Re: User Stream's API usage

2010-04-14 Thread Jud
On Apr 14, 7:17 pm, John Kalucki j...@twitter.com wrote:
 Email me your account name.
done
 You are in, but not getting data. Also, is this account following anyone?
it is not




[twitter-dev] Re: Annotation details

2010-04-14 Thread Jud
On Apr 14, 5:05 pm, James Teters jtet...@gmail.com wrote:
 Any ideas on size limitations or restrictions for this meta data?
good question; I have the same one.

simple math based on the average status byte size (of the status
structure coming through the streaming or REST interface) tells us
that it wouldn't take much data jammed into the annotations field to
double that size. what status-size increase is Twitter's
infrastructure ready/willing to tolerate?
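To make that concrete, a back-of-the-envelope sketch (the byte counts below are assumptions for illustration, not measured figures):

```python
# Illustrative arithmetic only: if annotations carry roughly as many
# bytes as the status structure itself, the payload doubles.
avg_status_bytes = 2500   # assumed average size of a status payload
annotation_bytes = 2500   # assumed annotations of comparable size
total = avg_status_bytes + annotation_bytes
growth = total / avg_status_bytes  # 2.0 -> the payload has doubled
```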

it seems to me that a few things are NOT candidates for the
annotations field(s):
- void * (for you old schoolers on the list)
- media whose original native format is binary (e.g. photos/videos)

annotations will need limitations like:
- overall size
- if key/value pairs become the model... they'll need individual size
limitations (for name and value)
- max number of pairs
- etc.

the whole thing feels driven by the answer to the original size
question.

another question: can the tweet originator remove annotations that
others put on their tweet? I'd assume I'd have control over my
original tweet in that manner (e.g. the notes functionality on
Flickr).




[twitter-dev] stream heartbeat/keep-alives

2010-03-19 Thread Jud
the twitter streaming api docs say: "Parsers must be tolerant of
occasional extra newline characters placed between statuses. These
characters are placed as periodic keep-alive messages, should the
stream of statuses temporarily pause. These keep-alives allow clients
and NAT firewalls to determine that the connection is indeed still
valid during low volume periods."

that's all well and good, but I'd like some clarification on behavior
I'm seeing. I never see newlines come through alone; rather, I always
see CRLF (carriage return + linefeed, adjacent) pairs (two chars) come
through at 30-second intervals as the keep-alive.

as a result, I've built my parser to consider the combination CRLF as
the heartbeat. should I be doing this, or am I missing something along
the way in which I should truly only ever be looking for LFs?
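One defensive option, pending clarification, is to accept both bare LFs and CRLFs as delimiters by normalizing line endings before deciding — a sketch:

```python
# Delimiter-tolerant reader sketch: strip any trailing CR, then treat a
# blank line as a keep-alive and anything else as a status payload.
# A real client would buffer the tail of each chunk until the next
# newline arrives, since a chunk can end mid-line.
def split_stream(chunk):
    """Yield ('keepalive', None) or ('status', line) per line in a chunk."""
    for raw in chunk.split("\n"):
        line = raw.rstrip("\r")   # tolerate CRLF as well as bare LF
        if line == "":
            yield ("keepalive", None)
        else:
            yield ("status", line)

events = list(split_stream('{"id":1}\r\n\r\n{"id":2}\n'))
```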

To unsubscribe from this group, send email to
twitter-development-talk+unsubscribe@googlegroups.com or reply to this
email with the words REMOVE ME as the subject.


[twitter-dev] Migrating to Twitter Streaming API

2010-02-15 Thread Jud
Here at Gnip we get a lot of people asking whether they should move to
the Streaming API, and we talk to a lot of people whom we politely
suggest make the move. We've written up a primer on the whole process
that covers most of what we've learned. We'd love feedback from the
dev community on other tips or suggestions you've picked up.

http://blog.gnip.com/2010/02/15/migrating-to-the-twitter-streaming-api-a-primer/

Jud


[twitter-dev] broken refresh link in search api results?

2009-11-03 Thread Jud

I used to be able to grab the refresh link out of an xml document
returned from a query like http://search.twitter.com/search.atom?q=iphone
. however, now, after two iterations of grabbing the refresh link, I
get 403s back from search.twitter.com. the refresh link appears to be
broken/poorly encoded.

steps to reproduce via curl:
request 1: http://search.twitter.com/search.atom?q=iphone
- works fine

request 2:
http://search.twitter.com/search.atom?q=iphone&amp;since_id=5393893759
- this URL came from the refresh link within the response body's XML
document returned in request 1.
- works fine

request 3:
http://search.twitter.com/search.atom?amp%3Bsince_id=5393893759&amp;q=iphone&amp;since_id=5393907862
- this URL came from the refresh link within the response body's XML
document returned in request 2.
- results in a 403 from the server.
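A client-side workaround (not a fix for the server-side bug): the refresh link is XML-escaped inside the Atom document, so `&` appears as `&amp;`; unescaping it before reuse keeps the bogus `amp;since_id` parameter out of the next request:

```python
# Unescape the XML-escaped refresh link before reusing it as a URL, so
# "&amp;" becomes a plain "&" query-parameter separator.
from xml.sax.saxutils import unescape

refresh = "http://search.twitter.com/search.atom?q=iphone&amp;since_id=5393893759"
clean = unescape(refresh)
# clean -> http://search.twitter.com/search.atom?q=iphone&since_id=5393893759
```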