Re: [twitter-dev] Getting since date or since_id is too old Error code 403 when trying to use search API with since_id

2011-05-09 Thread Mark Linsey
since_id values are tweet IDs, not unix timestamps.
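
A minimal sketch of the difference (Python, using the .json variant of the Search endpoint; none of this code is from the thread itself): since_id should be the numeric ID of a tweet you already have, not a time value.

import urllib.parse

def search_url(query, since_id=None):
    # Build a Search API URL; since_id, when given, must be a tweet ID,
    # e.g. the highest status ID seen in a previous result set.
    params = {"lang": "en", "rpp": 100, "result_type": "mixed", "q": query}
    if since_id is not None:
        params["since_id"] = since_id
    return "http://search.twitter.com/search.json?" + urllib.parse.urlencode(params)

# Wrong: 1304633831 is a unix timestamp, which Search reads as a very old tweet ID.
# Right: a real status ID, for example 18953620545.
print(search_url('"Hell To Pay" Neal Hall', since_id=18953620545))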

On Mon, May 9, 2011 at 8:47 PM, Abhishek Jain abhishek2j...@gmail.com wrote:

 Hi Matt, thanks for the quick response:


 http://search.twitter.com/search.atom?lang=en&rpp=100&result_type=mixed&q=%22Hell+To+Pay%22+Neal+Hall&since_id=1304633831

 However, unix time 1304633831 corresponds to May 5th; is that too old for the
 Twitter API?

 Cheers,
 Abhi


 On Mon, May 9, 2011 at 8:42 PM, Matt Harris thematthar...@twitter.com wrote:

 Hi Abhi,

 The ID you are passing is 1304977102, which is really old. If you are not
 sure which since_id to use, you should omit it from your query. The response
 from the API will then include the oldest since_id available.

 Remember the Search API only stores the last 7 days worth of Tweets.
 Anything created more than a week ago will not be available through the
 Search API.

 You can learn more about Search on our developer resources page:
 http://dev.twitter.com/doc/get/search
 and
 http://dev.twitter.com/pages/using_search

 Best,
 @themattharris
 Developer Advocate, Twitter
 http://twitter.com/themattharris
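
A rough illustration of that advice (Python, JSON endpoint; field names assumed from the Search response format of the time): make the first request without since_id, then reuse the max_id the API returns on later requests.

import json, time, urllib.parse, urllib.request

def search(query, since_id=None):
    params = {"q": query, "rpp": 100, "result_type": "recent"}
    if since_id:
        params["since_id"] = since_id
    url = "http://search.twitter.com/search.json?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())

since_id = None
while True:
    data = search('"Hell To Pay" Neal Hall', since_id)
    for tweet in data.get("results", []):
        print(tweet["id"], tweet["text"])
    # Carry the newest ID forward; the API never returns anything older
    # than its roughly 7-day window regardless of the since_id supplied.
    since_id = data.get("max_id") or since_id
    time.sleep(60)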



 On Mon, May 9, 2011 at 8:34 PM, Abhi abhishek2j...@gmail.com wrote:

 sample url :
 http://search.twitter.com/search.atom?lang=en&rpp=100&result_type=mixed&q=%22Geometry+Essentials+For+Dummies%22+Mark+Ryan&since_id=1304977102

 all my Twitter Search API calls (with since_id in them) are
 returning "since date or since_id is too old"







 --
 cheers,
 Abhishek jain





[twitter-dev] max_id being added to search queries?

2010-07-19 Thread Mark Linsey
Hi all,

I am using the Twitter Search API.  I am finding today that even though I am
not providing a max_id or a since_id, both are being added.  I am getting a
response warning saying that since_id has been adjusted.  I assume max_id is
being adjusted for the same reason.

I am fetching this url:
http://search.twitter.com/search.json?q=qwerpoiu&rpp=100&page=1&result_type=recent&lang=en

and getting this response:
{"results":[],"max_id":18949422775,"since_id":18903514913,"refresh_url":"?since_id=18949422775&q=qwerpoiu","results_per_page":100,"page":1,"completed_in":0.020541,"warning":"adjusted since_id to 18903514913 due to temporary error","query":"qwerpoiu"}

I was expecting to get this tweet, which does appear on search.twitter.com:
http://twitter.com/mjltest/statuses/18953620545

I assume that these are extra limits put in place today to deal with other
issues in the Twitter API.  Is there any way of getting recent search
results now?
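
A small sketch (Python; field names follow the JSON response quoted above) of how a client can at least surface the adjustment instead of silently getting an empty result set:

import json, urllib.request

url = ("http://search.twitter.com/search.json"
       "?q=qwerpoiu&rpp=100&page=1&result_type=recent&lang=en")
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read())

if "warning" in data:
    # e.g. "adjusted since_id to 18903514913 due to temporary error"
    print("Search adjusted the query:", data["warning"])
print("since_id:", data.get("since_id"), "max_id:", data.get("max_id"))
print(len(data.get("results", [])), "results returned")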

-Mark


[twitter-dev] Streaming API: Why not Curl?

2010-06-30 Thread Mark Linsey
I am just getting started with the streaming API, and I was a bit
puzzled by this line in the documentation:

While a client can be built around cycling connections, perhaps using
curl for transport, the overall reliability will tend to be poor due
to operational gotchas. Save curl for debugging and build upon a
persistent process that does not require periodic reconnections.

I am not at all familiar with the internals of how libcurl works, so
maybe I'm missing something quite obvious, but can't curl/libcurl keep
a persistent connection? Why does using it require periodic
reconnections? In fact, many examples around the web of how to consume
the streaming API seem to use libcurl. (I'm using Python so have been
looking at this example in particular:
http://arstechnica.com/open-source/guides/2010/04/tutorial-use-twitters-new-real-time-stream-api-in-python.ars)
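
For what it's worth, here is a minimal sketch (Python; the endpoint, basic-auth credentials, and buffer handling are my assumptions, not taken from the docs quoted above) of the "persistent process" the documentation describes: one long-lived HTTP connection that keeps reading newline-delimited JSON, rather than repeated short-lived curl invocations.

import base64, http.client, json

USER, PASSWORD = "your_username", "your_password"  # placeholder credentials

conn = http.client.HTTPConnection("stream.twitter.com")
auth = base64.b64encode((USER + ":" + PASSWORD).encode()).decode()
conn.request("GET", "/1/statuses/sample.json",
             headers={"Authorization": "Basic " + auth})
resp = conn.getresponse()

buf = b""
while True:
    chunk = resp.read(1024)          # blocks on the open connection
    if not chunk:
        break                        # stream closed; reconnect with backoff
    buf += chunk
    while b"\r\n" in buf:
        line, buf = buf.split(b"\r\n", 1)
        if line.strip():
            status = json.loads(line)
            print(status.get("text", ""))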