Re: [twitter-dev] Getting invalid timestamps from search API

2011-07-09 Thread Jeffrey Greenberg
I've been seeing this too, and our code has been stable for months, so I'm
not sure what it's about. We've also seen an increase in (though still very
few) non-parseable messages. We obtain tweets via XML.

www.tweettronics.com



On Sat, Jul 9, 2011 at 2:37 PM, Doza  wrote:

> Hi everybody,
>
> My application queries the search API periodically throughout the
> day.  I'm still working on a solution using the streaming API, but
> this has been working fairly well for my needs.
>
> About 4 days ago I started getting errors from the Python library that
> I use to perform the queries (tweepy). It appears that some results
> contain invalid timestamps. Here is a sample of the times that I see
> for some tweets:
>
> ValueError: time data u'Thu, 07 Jul 2011 24:58:43 +' does not
> match format '%a, %d %b %Y %H:%M:%S +'
> ValueError: time data u'Thu, 07 Jul 2011 24:59:45 +' does not
> match format '%a, %d %b %Y %H:%M:%S +'
> ValueError: time data u'Fri, 08 Jul 2011 24:58:03 +' does not
> match format '%a, %d %b %Y %H:%M:%S +'
> ValueError: time data u'Fri, 08 Jul 2011 24:33:41 +' does not
> match format '%a, %d %b %Y %H:%M:%S +'
>
> These timestamps all have 24 as the hour, which doesn't seem correct.
> Is anybody else seeing the same thing?
>
> Thanks,
> Mike
>
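A minimal parsing workaround for the hour-24 timestamps above, assuming the raw
created_at string can be intercepted before it reaches the library's parser.
The "+0000" offset in the format string is an assumption (the thread elides it),
and treating hour 24 as midnight of the following day is one reading of the bug,
not anything confirmed by Twitter:

    from datetime import datetime, timedelta

    # Assumed created_at layout; the trailing offset is elided in the thread,
    # so "+0000" is a guess.
    CREATED_AT_FORMAT = "%a, %d %b %Y %H:%M:%S +0000"

    def parse_created_at(raw):
        """Parse created_at, tolerating the bogus hour '24' shown above.

        'Thu, 07 Jul 2011 24:58:43 +0000' is read as 00:58:43 the next day.
        """
        try:
            return datetime.strptime(raw, CREATED_AT_FORMAT)
        except ValueError:
            parts = raw.split(" ")
            # Layout: ['Thu,', '07', 'Jul', '2011', '24:58:43', '+0000']
            if len(parts) == 6 and parts[4].startswith("24:"):
                parts[4] = "00:" + parts[4][3:]
                fixed = datetime.strptime(" ".join(parts), CREATED_AT_FORMAT)
                return fixed + timedelta(days=1)
            raise

With tweepy the parsing typically happens inside the library, so in practice you
would pre-process the raw payload (or patch the parser) before handing it over;
the sketch only shows the date arithmetic.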


Re: [twitter-dev] Re: TweetDeck joins Twitter

2011-05-27 Thread Jeffrey Greenberg
I've seen this movie before and here's my unsolicited advice to other
developers. Back in the day, the Microsoft Desktop was an application that
you could replace. So I created one of the earliest Windows interfaces with
icons and drag and drop (called Aporia and later WinTools). Soon other
players joined the fray of replacing and improving the Windows desktop (i.e.
Windows versions 2 - 3.1) including heavy hitters like HP, Xerox, early
Symantec, etc.  Then at some point, Microsoft decided the desktop was
theirs, and everybody had to get out of the pool.  All of our products were
crushed, and many of us went out of business.  A couple of products had
already merged into general utilities offerings, one of which ultimately
morphed into the Symantec you know today.

So I've seen this movie before, where you play in someone else's pool.  It
can be very sweet, but only if you're early enough or if you don't depend
entirely on Microsoft, I mean Twitter. It's very, very high risk, and there
can be some reward.  One of those rewards may not be money but the
experience of playing this game. It could be expensive learning, but it's
learning nonetheless.  But also, just to say it, creating a startup of any
kind is very, very risky.  Just know what you're getting into.

So as I see it, the low-hanging Twitter fruit is pretty much entirely picked
now.  The odds of a small developer being able to create something that
isn't already being funded and worked on are pretty close to zero.  So you'll
either have to do something radically new related to Twitter, or use Twitter
as just a part of your offering.

jeffrey greenberg
http://www.jeffrey-greenberg.com
http://www.tweettronics.com

On Fri, May 27, 2011 at 1:42 AM, Tammy Fennell 
wrote:
> Hey,
>
> I think Ernandes is right. There is tons of room for innovation. I
> started a LinkedIn group (all the other ones I found seemed a little
> stale and disused):
> http://www.linkedin.com/groups/Twitter-3rd-Party-Developers-3928159
>
> I'd love to see more developers in there so we can keep pushing
> forward a positive ecosystem. @TheMattHarris @RyanSarver or anyone at
> Twitter, I'd love if you popped in too. Linkedin groups are pretty
> effective for these sorts of things.
>
> Have a nice day everyone,
>
> ~Tammy
>
> On May 26, 3:44 pm, "Ernandes Jr."  wrote:
>> Twitter is trying to defend itself from becoming a hostage, since most
>> users use these apps instead of the website. With such a huge user base,
>> TweetDeck, along with other apps, e.g., Echofon, would eventually have
>> enough of an audience to create a competitor to Twitter.
>>
>> Anyway, there is also room for innovation. Let's put our heads to work.
>> Who knows, maybe our app becomes the next big purchase by Twitter. :D
>>
>> > On Wed, May 25, 2011 at 2:02 PM, Felipe Knorr Kuhn wrote:
>>
>> > I just wish Twitter would focus on the features the users want, rather
>> > than spending money buying Twitter clients.
>>
>> > Some are really simple, like raising the number of lists you can create,
>> > a URL shortener on the web interface, tweeting to groups, etc.
>>
>> > Or the really useful feature that is searching for tweets older than
>> > one week.
>>
>> > But oh well, you can't demand features from a free service, anyway.
>>
>> > FK
>>
>> > On Wed, May 25, 2011 at 12:34 PM, Matt Harris <thematthar...@twitter.com> wrote:
>>
>> >> Hey everyone,
>>
>> >> Today we announced on the Twitter Blog
>> >> (http://blog.twitter.com/2011/05/all-decked-out.html) that the TweetDeck
>> >> team has joined Twitter.
>>
>> >> When Tweetie became part of the Twitter family the user growth was huge,
>> >> creating more opportunities for developers to build applications for the
>> >> growing audience. With TweetDeck now joining us we expect to see even more
>> >> opportunities become available to you and look forward to seeing what you
>> >> create.
>>
>> >> TweetDeck is a powerful platform for brands, publishers and advanced
>> >> Twitter users, and we’re really excited that Iain and his team are joining
>> >> us. We’re looking forward to working with them as we invest in and support
>> >> the TweetDeck that you all are familiar with.
>>
>> >> Best
>> >> @themattharris
>>

Re: [twitter-dev] Re: Search API rate limit change?

2011-03-21 Thread Jeffrey Greenberg
Taylor,
Yeah, this was definitely NOT good. In the past, when there was a
service disruption, your API group would post something on your status
page and tweet about it... Instead, I'm finding out about this from my
customers...

Did y'all tweet about this or present this somewhere where I could find it?

Jeffrey
Tweettronics.com

On Sun, Mar 20, 2011 at 3:14 PM, Waldron Faulkner
 wrote:
> Without prior notice, I can understand (circumstances), but without
> any kind of subsequent announcement?? It means we have to discover issues
> ourselves, verify that they're Twitter related (and not internal), and
> then search around for existing discussion on the topic. It would save us a
> lot of time and headaches if Twitter would just announce stuff like
> this.
>
> On Mar 18, 2:51 pm, Taylor Singletary 
> wrote:
>> We're working to reinstate the usual limits on the Search API; due to the
>> impact of the Japanese earthquake and resultant query increase against the
>> Search API, some rates were adjusted to cope & better serve queries. We'll
>> give everyone an update when the various limits are adjusted.
>>
>> @episod  - Taylor Singletary - Twitter Developer
>> Advocate
>>
>> On Fri, Mar 18, 2011 at 11:39 AM, Hayes Davis  wrote:
>> > Hi,
>>
>> > We're seeing this as well starting at approximately the same time as
>> > described. We've backed off on searching but are seeing no reduction in the
>> > sporadic limiting. It also appears that the number of results returned on
>> > successful queries is severely limited. Some queries that often have 1500
>> > tweets from the last 5 days are returning far fewer results from only the
>> > last day.
>>
>> > Could we get an update on this?
>>
>> > Hayes
>>
>> > On Fri, Mar 18, 2011 at 10:13 AM, Eric  wrote:
>>
>> >> We're also seeing 400s on different boxes across different IP
>> >> addresses with different queries (so it does not appear to be server
>> >> or query specific). These began on all boxes at 2 a.m. UTC. We've
>> >> backed off on both number and rate of queries with no effect. We've
>> >> also noticed an increase in sporadic fail whales via browser based
>> >> search (atom and html) from personal accounts, although we haven't
>> >> attempted to quantify it.
>>
>> >> On Mar 18, 7:40 am, zaver  wrote:
>> >> > Hello,
>>
>> >> > After the latest performance issues with the search API I have been
>> >> > seeing a lot of 420 response codes. From yesterday until now I only get
>> >> > 420 responses on every search I make. In particular, I search for
>> >> > about 100 keywords simultaneously every 6 mins. Why is this happening?
>> >> > Was there any change to the Search API limit?
>>
>> >> > Any help is greatly appreciated.
>>
>> >> > Thanks,
>> >> > Zaver
>>


[twitter-dev] Re: Twitter Search bug: confusing Swedish place "Åre" with English verb "are"

2011-03-12 Thread Jeffrey Greenberg
Any response on this from twitter?

Sent from my iPhone

On Mar 3, 2011, at 1:46 PM, Jeffrey Greenberg  
wrote:

> Hi,
> We have a customer who is trying to find tweets with the Swedish word
> "Åre", which is a place, and is getting tweets with the English word
> "are" in it.
> 
> I've reproduced this on search.twitter.com... If the search is "Åre
> -are" it returns nothing, because it seems that search.twitter.com sees
> them as the same word.
> 
> Can this get addressed?  And/or is there a workaround (not involving
> streams)?  (fyi: Geographic/location related search will not work in
> this situation).
> 
> Thanks,
> jeffrey greenberg
> www.tweettronics.com



[twitter-dev] Twitter Search bug: confusing Swedish place "Åre" with English verb "are"

2011-03-03 Thread Jeffrey Greenberg
Hi,
We have a customer who is trying to find tweets with the Swedish word
"Åre", which is a place, and is getting tweets with the English word
"are" in it.

I've reproduced this on search.twitter.com... If the search is "Åre
-are" it returns nothing, because it seems that search.twitter.com sees
them as the same word.

Can this get addressed?  And/or is there a workaround (not involving
streams)?  (fyi: Geographic/location related search will not work in
this situation).

Thanks,
jeffrey greenberg
www.tweettronics.com
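One stopgap while the index folds the accent away: over-fetch on the folded term
and post-filter client-side on the literal, accent-sensitive string. A rough
sketch, assuming the old v1 search.json response shape (a "results" list of
objects carrying a "text" field); nothing here is an official workaround:

    # -*- coding: utf-8 -*-
    import re

    # Word-boundary match on the accented form; re.UNICODE keeps 'å' a word
    # character and re.IGNORECASE folds Å/å only, so plain English 'are'
    # no longer matches.
    ARE_PATTERN = re.compile(u"\\bÅre\\b", re.IGNORECASE | re.UNICODE)

    def keep_accent_sensitive(tweets):
        """Keep only tweets whose visible text literally contains 'Åre'."""
        return [t for t in tweets if ARE_PATTERN.search(t.get("text", u""))]

    # Usage sketch:
    # data = json.loads(urllib2.urlopen(search_url).read())
    # swedish_only = keep_accent_sensitive(data["results"])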



[twitter-dev] Garbled http headers and XML payloads - addressed?

2010-07-20 Thread Jeffrey Greenberg
Has this issue been addressed?

We've seen a huge increase in the last 36 hours and it's affecting
us... Just to say it, we started seeing these once or twice a day since
the World Cup ended... And then this nasty spike...

Jeffrey Greenberg
Tweettronics.com


Re: [twitter-dev] t.co issue -- querying for original url in streaming & search apis

2010-06-09 Thread Jeffrey Greenberg
Just to say it, this matching of the "actual" URL as well as the
shortened, supplied URL has been regarded as a bug by our users; it
confuses them. I would prefer that it were an optional part of search so
that I could turn it off... They only want to match the literal
text... We provide means for them to deal with the actual/final URL by
other means and in a different context.


The entire t.co idea is not really of use to us as a monitoring service.


Sent from my iPhone

On Jun 9, 2010, at 2:46 PM, Abraham Williams <4bra...@gmail.com> wrote:

Currently Search (and probably Streaming) returns results that match
text in unshortened URLs, not just in the text of the status. I doubt
it would change now, especially with t.co coming.


Abraham
-
Abraham Williams | Developer for hire | http://abrah.am
@abraham | http://projects.abrah.am | http://blog.abrah.am
This email is: [ ] shareable [x] ask first [ ] private.


On Wed, Jun 9, 2010 at 12:13, Jim Gilliam  wrote:
I'm creating a new thread for this because a few others have
mentioned it, and we haven't gotten a response yet.  My hunch is
that changing those APIs involves other teams within Twitter, so
figuring out a solution could be challenging.


Here is the issue.  We need to be able to get matches on the
original URL through the streaming and search APIs.  For me, I'm
tracking "act" so I can match tweets that link to 'http://act.ly'.
This is not a link shortener service; the actual pages live at
act.ly, and it was all designed specifically for Twitter so there
would be no need for URL shorteners.


As far as I'm concerned, it's fine if that link changes to t.co, as  
long as I can still get matches on act.ly (or act) through the  
streaming API (the search API is going to be important for people  
too, but less of an issue for me personally).


The most elegant way to fix this would be to allow tracking of the  
original URL.  So I can put in a domain name, or URL substring, and  
match everything that way.  Same with search. This would be useful  
to a lot of people, and virtually all link oriented web apps with  
APIs provide a way to get all the matches for a particular domain.  
(digg, google, yahoo, etc)


I'm sure there are other workaround ways of doing this, and I'm all  
ears.  It would be SUPER NICE (wink wink) to hear some kind of  
assurance that there will be a way for us to query this type of  
information before the t.co changes go live.


Thanks guys...

Jim Gilliam
http://act.ly/
http://twitter.com/jgilliam

On Tue, Jun 8, 2010 at 4:43 PM, Jim Gilliam  wrote:
Will we be able to get matches on the original URL through the  
streaming API?


For example, I'm tracking "act" so I can match tweets that link to 'http://act.ly' 
.  Will I still be able to do that?


Jim Gilliam
http://act.ly/
http://twitter.com/jgilliam


On Tue, Jun 8, 2010 at 4:33 PM, Dewald Pretorius   
wrote:

Raffi,

I'm fine with everything up to the new 140 character count.

If you count the characters *after* link wrapping, you are seriously
going to mess up my system. My short URLs are currently 18 characters
long, and they will be 18 long for quite some time to come. After that
they will be 19 for a very long time to come.

If you implement this change, a ton, and I mean a *huge* number of my
system's updates are going to be rejected for being over 140
characters.

On Jun 8, 7:57 pm, Raffi Krikorian  wrote:
> hi all.
>
> twitter has been wrapping links in e-mailed DMs for a couple months now.
> with that feature, we're trying to protect users against phishing and other
> malicious attacks. the way that we're doing this is that any URL that comes
> through in a DM gets currently wrapped with a twt.tl URL -- if the URL turns
> out to be malicious, Twitter can simply shut it down, and whoever follows
> that link will be presented with a page that warns them of potentially
> malicious content. in a few weeks, we're going to start slowly enabling this
> throughout the API for all statuses as well, but instead of twt.tl, we will
> be using t.co.
>
> practically, any tweet that is sent through statuses/update that has a link
> on it will have the link automatically converted to a t.co link on its way
> through the Twitter platform. if you fetch any tweet created after this
> change goes live, then its text field will have all its links automatically
> wrapped with t.co links. when a user clicks on that link, Twitter will
> redirect them to the original URL after first confirming with our database
> that that URL is not malicious.  on top of the end-user benefit, we hope to
> eventually provide all developers with aggregate usage data around your
> applications such as the number of clicks people make on URLs you display
> (it will, of course, be in aggregate and not identifiable manner).
> additionally, we want to be able to build services and APIs that can make
> algorithmic re

[twitter-dev] Re: Search API: searching for "Don" and finding "don't" instead

2010-06-07 Thread Jeffrey Greenberg
Thanks Matt,

Unless they've been updated lately, the docs are not clear as to how
to handle contractions, so thanks for the -don't example.

Given that "don't" is regarded as a "word", we believe that search
should _not_ return "don't" in a search for "don"... It's a bug in our
opinion.

Further, I'm not sure whether this is a problem only with contractions
(that is, the handling of single-quote characters), or if search reacts
in weird/inconsistent/buggy ways when other special characters (e.g.
single quotes, UTF-8 stuff, etc.) are used.  Can you check whether
there is consistent handling and a spec for these from the search team?

Thanks,
Jeffrey
http://www.tweettronics.com

On Jun 7, 10:50 am, themattharris  wrote:
> Hi Jeffrey,
>
> Thanks for bumping this to our attention. Some of the threads fall off
> our radar so a prompt is always welcome.
>
> Search treats separate words as an AND search meaning a search for:
>   Don SomeLastName
> will translate to:
>   Don AND SomeLastName.
>
> For a complete phrase search you would instead want to search for:
>   "Don SomeLastName".
>
> The problem you are experiencing with "Don" matching "Don't" is, as
> you suggested, managed by appending "-don't" to the query. You don't
> need to escape the apostrophe and the quotes are not necessary, making
> your search query:
>   Don SomeLastName -don't
>
> You can read more about the supported advanced search operators on the
> search site [1].
>
> Hope that helps,
>
> Matt Harris
> Developer Advocate, Twitter
> http://twitter.com/themattharris
>
> 1. http://search.twitter.com/operators
>
> On Jun 7, 9:09 am, Jeffrey Greenberg 
> wrote:
>
>
>
> > Hello Twitter,
> > Anyone home?
> > j
>
> > On Jun 2, 11:28 pm, Jeffrey Greenberg 
> > wrote:
>
> > > We have a user that is causing us to create a search of the form:
> > >    Don SomeLastName
> > > which is returning tweets containing "don't" and SomeLastName.
>
> > > That's no good!
>
> > > Is there a decent workaround for this by modifying the search? e.g.
> > >     Don SomeLastName -don't
> > > but how do you escape the single quote?  Like this?
> > >     Don SomeLastName -"don't"
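For reference, Matt's suggestion drops straight into the q parameter of the old
search endpoint, and URL-encoding takes care of the apostrophe, so no escaping
is needed. A small sketch; the endpoint, the rpp parameter and the top-level
"results" key are as they existed for the v1 Search API of this era, so treat
them as assumptions against current docs:

    import json
    import urllib
    import urllib2

    def search(query, rpp=100):
        """Run a raw query string against the old Search API and return results."""
        qs = urllib.urlencode({"q": query, "rpp": rpp})
        url = "http://search.twitter.com/search.json?%s" % qs
        return json.load(urllib2.urlopen(url))["results"]

    # Per the thread: AND is implicit, quotes force a phrase, '-' negates a term.
    # search("Don SomeLastName -don't")    # drop tweets containing "don't"
    # search('"Don SomeLastName"')         # exact phrase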


[twitter-dev] Re: Search API: searching for "Don" and finding "don't" instead

2010-06-07 Thread Jeffrey Greenberg
Hello Twitter,
Anyone home?
j

On Jun 2, 11:28 pm, Jeffrey Greenberg 
wrote:
> We have a user that is causing us to create a search of the form:
>    Don SomeLastName
> which is returning tweets containing "don't" and SomeLastName.
>
> That's no good!
>
> Is there a decent workaround for this by modifying the search? e.g.
>     Don SomeLastName -don't
> but how do you escape the single quote?  Like this?
>     Don SomeLastName -"don't"


[twitter-dev] Search API: searching for "Don" and finding "don't" instead

2010-06-02 Thread Jeffrey Greenberg
We have a user that is causing us to create a search of the form:
   Don SomeLastName
which is returning tweets containing "don't" and SomeLastName.

That's no good!

Is there a decent workaround for this by modifying the search? e.g.
Don SomeLastName -don't
but how do you escape the single quote?  Like this?
Don SomeLastName -"don't"


[twitter-dev] search.twitter.com 420 errors today when we did not get them before

2010-05-27 Thread Jeffrey Greenberg
We are seeing 420 errors on our account right now, today.  We have
hardly seen them at all before... We have a pause in our search
cycle with Twitter that has been sufficient to keep us from exceeding our
whitelist allocation.  Have you revised downward the whitelist limits
in general, or for us in particular?  We have been using this API for
more than 18 months without much problem.

Our production app is using basic auth: is that a factor?  (We are
going to switch to OAuth later today.)

This change of behavior is impacting our business.


@JeffGreenberg
jeffrey greenberg
tweettronics.com


[twitter-dev] Re: Twitter Search request failed. error 420

2010-05-27 Thread Jeffrey Greenberg
We are seeing 420 errors on our account right now, today, when we have
not seen them before... We have a pause in our search cycle with
Twitter that has been sufficient to keep us from exceeding our
whitelist allocation.  Have you revised downward the whitelist
limits?  We have been using this API for more than 18 months without
much problem, but right now we are having problems, and it's impacting our business.

@JeffGreenberg

jeffrey greenberg
tweettronics.com

On May 27, 7:18 am, Jonathan Reichhold 
wrote:
> 420 is a rate limit.  The actual error message in the response does state
> this.
>
> Requesting things more than every 20 seconds will not help your results be
> any fresher.
>
> Jonathan
>
>
>
> On Thu, May 27, 2010 at 6:29 AM, Karolis  wrote:
> > Hello
>
> > I was sending this request via PHP: status:
> > http://search.twitter.com/search.json?geocode=55.6762944,12.5681157,10mi&rpp=100
>
> > the reply I get is: 420 unused. failed to open stream: HTTP request
> > failed! HTTP/1.1
>
> > Is this a problem with my code or with the reliability of the API?
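One general way to cope with 420 responses, whatever the underlying limit turns
out to be, is exponential backoff rather than retrying on a fixed schedule. A
minimal Python 2-era sketch against the unauthenticated search.json endpoint;
the starting delay and the cap are placeholders, not published limits:

    import json
    import time
    import urllib
    import urllib2

    def search_with_backoff(query, max_wait=320):
        """Retry a search with exponential backoff when rate limited (HTTP 420)."""
        wait = 20  # per the advice above: don't poll a query more than every 20s
        while True:
            url = "http://search.twitter.com/search.json?" + urllib.urlencode(
                {"q": query, "rpp": 100})
            try:
                return json.load(urllib2.urlopen(url))
            except urllib2.HTTPError as e:
                if e.code != 420 or wait > max_wait:
                    raise          # not a rate limit, or we've waited long enough
                time.sleep(wait)   # back off and try again
                wait *= 2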


[twitter-dev] Search api returning results based on walking shortened URLS: causing problems.

2010-05-26 Thread Jeffrey Greenberg
So we have a customer that is searching, for example, for hotels.com.
So we use the search API and we get from Twitter a tweet that has no
such text in it, but it turns out that the expanded (walked) URL contains the
string 'hotels.com':

Here's the tweet:
Siam Bayview Hotel Pattaya, Beach Rd. from THB 2,010 incl
breakfast Special Rate http://bit.ly/295HOI Thailand hotels
Here's the walked bit.ly URL:
   http://www.r24.org/patong-beach-hotels.com/pattaya/siambayview/

In this case, this match isn't good.  They don't want r24.org stuff,
they want hotels.com stuff...  On the other hand, it's great when it
really shows hotels.com stuff...

I'm not sure what the 'right" thing to do is at this moment, as I'm
reacting to the customer's urgency and problem in getting unrelated
stuff showing up in their search...

I'm not sure how I should address this:
1. recommend that Twitter make some sort of mod to the search API (I
don't have a good idea at the moment about what you should do: make
such URL walking optional? etc.)
2. do some sort of processing on our end, and communicate better
about what search does to our customers

So:
a. What are y'all's thoughts on this one?

b. I believe that you (Twitter) walk some shorteners but not all of
them, e.g. bit.ly URLs and your own shortener.  What is the current
list that you do walk?

This is related to entity parsing discussion here:
http://groups.google.com/group/twitter-development-talk/browse_thread/thread/9b869a9fe4d4252e/861a2aa59b563f33?lnk=gst&q=search+url#861a2aa59b563f33

Thanks,
Jeffrey Greenberg
tweettronics.com
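Since Twitter never published the exact list of shorteners it walks, one option
is to resolve the links yourself and decide client-side whether a hit came from
the visible text or only from an expanded URL, then drop or flag the latter. A
rough sketch, assuming Python 2 and the standard library; it says nothing about
how Twitter's own walking works:

    import re
    import urllib2

    def resolve(short_url, timeout=5):
        """Follow redirects and return the final URL (or the original on error)."""
        try:
            return urllib2.urlopen(short_url, timeout=timeout).geturl()
        except Exception:
            return short_url

    def match_source(tweet_text, needle="hotels.com"):
        """Report whether a term appears in the visible text, an expanded link, or both."""
        text = tweet_text.lower()
        links = re.findall(r"https?://\S+", tweet_text)
        return {
            "in_text": needle in text,
            "in_expanded_link": any(needle in resolve(u).lower() for u in links),
        }

    # A result with in_text=False and in_expanded_link=True is the r24.org case
    # above and could be filtered out before it reaches the customer.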


[twitter-dev] search api :slowdown or throttling?

2010-05-20 Thread Jeffrey Greenberg
Our app uses the search API extensively, and we've noticed that
response times have degraded dramatically for aggregates of search
requests in the past few days. Is that really the case?

Our production app is using basic auth at the moment, and we're
wondering if that's a factor in this?


jeffrey greenberg


Re: [twitter-dev] Logical AND supported in streaming API filter endpoint

2010-04-23 Thread Jeffrey Greenberg
When will we get "-" (aka "not")?

On Monday, April 19, 2010, Mark McBride  wrote:
> To date the streaming API has only supported logical OR in track
> keywords (http://apiwiki.twitter.com/Streaming-API-Documentation#track).
>  Today we're happy to announce that we support logical ANDing in
> production as well.
>
> The track parameter is treated as a series of phrases.  Phrases are
> separated by commas. Words within phrases are delimited by spaces. A
> tweet matches if any phrase matches. A phrase matches if all of the
> words are present in the tweet. (e.g. 'the twitter' is 'the' AND
> 'twitter', and 'the,twitter' is 'the' OR 'twitter'.).  Some
> examples...
> 1) "twitter api,twitter streaming"
> (http://stream.twitter.com/1/statuses/filter.xml?track=twitter+api%2Ctwitter+streaming)
> will match the tweets "The Twitter API is awesome" and "The twitter
> streaming deal is fast", but not "I'm new to Twitter"
> 2) The same approach to dealing with case, punctuation, @replies and
> hashtags still applies.  So "chirp search,chirp streaming"
> (http://stream.twitter.com/1/statuses/filter.xml?track=chirp+search%2Cchirp+streaming)
> will match "Listening to the @chirp talk on search", "I'm at Chirp
> talking about search!", and "loving this search talk #chirp"
>
> This should dramatically close the gap on what you can do with the
> search API but not with streaming, and also reduce the amount of data
> users have to consume to match on multiple keywords.
> Comments/questions welcome as always.
>
>   ---Mark
>
> http://twitter.com/mccv
>
>
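The comma/space rule above is easy to mirror locally, for tests or for
re-checking matches on your side of the connection. A small sketch, not
Twitter's implementation: a tweet matches if any comma-separated phrase
matches, and a phrase matches if all of its space-separated words are present.

    def track_matches(track, tweet_text):
        """Emulate the documented 'track' semantics: commas are OR, spaces are AND."""
        # Rough tokenization: strip surrounding punctuation, @ and #, lowercase.
        words = set(w.strip("#@.,!?:;'\"").lower() for w in tweet_text.split())
        return any(
            all(word.lower() in words for word in phrase.split())
            for phrase in track.split(",")
        )

    # Examples from the announcement above:
    assert track_matches("twitter api,twitter streaming", "The Twitter API is awesome")
    assert track_matches("chirp search,chirp streaming", "loving this search talk #chirp")
    assert not track_matches("twitter api,twitter streaming", "I'm new to Twitter")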


Re: [twitter-dev] Announcing Twurl: OAuth-enabled curl for the Twitter API

2010-04-20 Thread Jeffrey Greenberg
I'm already a whitelisted app (Tweettronics.com) and do not want
access downgraded. I'm concerned that switching to OAuth and
"registering" my app at dev might cause my whitelisting status to
change.  Can you assure me that won't happen?

Thx

Sent from my iPhone

On Apr 20, 2010, at 12:38 PM, "Dean Collins"  wrote:

Great so you are moving before oauth 2 is finished. You guys are  
crazy. You’re making everyone change now and then change again in 3  
months.





Cheers,
Dean

From: twitter-development-talk@googlegroups.com [mailto:twitter- 
development-t...@googlegroups.com] On Behalf Of Marcel Molina

Sent: Tuesday, April 20, 2010 3:13 PM
To: twitter-development-talk@googlegroups.com; 
twitter-api-annou...@googlegroups.com
Subject: [twitter-dev] Announcing Twurl: OAuth-enabled curl for the  
Twitter API


We've announced that come June 2010, Basic Auth will no longer be  
supported via the Twitter API. All authenticated requests will be  
moving to OAuth (either version 1.0a or the emerging 2.0 spec).  
There are many benefits from this change. Aside from the obvious  
security improvements, having all requests be signed with OAuth  
gives us far better visibility into our traffic and allows us many  
more tools for controlling and limiting abuse. When we know and  
trust the origin of our traffic we can loosen the reins a lot and
trust by default. We've already made a move in this direction by  
automatically increasing rate limits for requests signed with OAuth  
made to the new versioned api.twitter.com host.


One of the often cited virtues of the Twitter API is its simplicity.  
All you have to do to poke around at the API is curl, for example, http://api.twitter.com/1/users/noradio.xml 
 and you're off and running. When you require that OAuth be added to  
the mix, you risk losing the simplicity and low barrier to entry  
that curl affords you. We want to preserve this simplicity. So we've  
provided two tools to let you poke around at the API without having  
to fuss with all the extraneous details of OAuth. For those who want  
the ease of the web, we've already included an API console in our  
new developer portal at http://dev.twitter.com/console. And now  
today we're glad to make available the Twurl command line utility as  
open source software:


  http://github.com/marcel/twurl

If you already have RubyGems (http://rubygems.org/), you can install  
it with the gem command:


  sudo gem i twurl --source http://rubygems.org

If you don't have RubyGems but you have Rake (http://rake.rubyforge.org/ 
), you can install it "from source". Check out the INSTALL file (http://github.com/marcel/twurl/blob/master/INSTALL 
).


Once you've got it installed, start off by checking out the README (http://github.com/marcel/twurl/blob/master/README 
) (you can always get the README by running 'twurl -T'):


+-------+
| Twurl |
+-------+

Twurl is like curl, but tailored specifically for the Twitter API.
It knows how to grant an access token to a client application for
a specified user and then sign all requests with that access token.

It also provides other development and debugging conveniences such
as defining aliases for common requests, as well as support for
multiple access tokens to easily switch between different client
applications and Twitter accounts.

+-----------------+
| Getting Started |
+-----------------+

The first thing you have to do is register an OAuth application
to get a consumer key and secret.

  http://dev.twitter.com/apps/new

When you have your consumer key and its secret you authorize
your Twitter account to make API requests with your consumer key
and secret.

  % twurl authorize --consumer-key the_key   \
--consumer-secret the_secret

This will return a URL that you should open up in your browser.
Authenticate to Twitter, and then enter the returned PIN back into
the terminal.  Assuming all that works well, you will be authorized
to make requests with the API. Twurl will tell you as much.

If your consumer application has xAuth enabled, then you can use
a variant of the above

  % twurl authorize -u username -p password  \
--consumer-key the_key   \
--consumer-secret the_secret

And, again assuming your username, password, key and secret are
correct, this will authorize you in one step.

+-----------------+
| Making Requests |
+-----------------+

The simplest request just requires that you specify the path you
want to request.

  % twurl /1/statuses/home_timeline.xml

Similar to curl, a GET request is performed by default.

You can implicitly perform a POST request by passing the -d option,
which specifies POST parameters.

  % twurl -d 'status=Testing twurl' /1/statuses/update.xml

You can explicitly specify what request method to perform with
the -X (or --request-method) option.

  % twurl -X DELETE /1/statuses/destroy/123456.xml

+------------------+
| Creating aliases |
+------------------+

  %

[twitter-dev] Re: Recommended ways to demultiplex the search stream with thousands of searches

2010-04-19 Thread Jeffrey Greenberg
Just to clarify:
if I have thousands of boolean searches that map to the current search
capability, and if I want to map all or some of those onto the Twitter
Streaming API, I have to deal with the fact that streams don't support
boolean expressions, just direct single-term matches.  So I must
either create a homebrew boolean production scheme (e.g. the regex
idea I mentioned at the start) or use a heavier-weight free-text
search capability (e.g. Lucene).

Is that right?

jeffrey greenberg



On Apr 19, 1:52 pm, John Kalucki  wrote:
> In brief: Take all of your search terms and put them into a HashTable
> that maps from keyword to subscriber. Tokenize each tweet's text field
> and apply each token to the HashTable, sending the Tweet on to all
> subscribers. Each subscriber can do a generational deduplication to
> avoid getting each tweet twice -- by storing the status id in the
> subscriber object.
>
> If each subscriber keeps a copy of their search terms, you can even do
> subscriber removal from the HashTable when the subscriber stops their
> query.
>
> You can tokenize multi-threaded, but do the hash table apply and hash
> table set operations in a single thread. This is plenty of concurrency
> and leads to a simple programming model -- and the easy generational
> deduplication scheme above.
>
> -John Kalucki
> http://twitter.com/jkalucki
> Infrastructure, Twitter Inc.
>
> On Mon, Apr 19, 2010 at 11:28 AM, Jeffrey Greenberg
>
>  wrote:
> > I was unable to attend Chirp in person, so I could not hear John
> > Kalucki's comments on this... Anyone have any notes on this... John?
>
> > j
>
> > On Apr 16, 3:36 pm, Jeffrey Greenberg 
> > wrote:
> >> So I'm looking at the streaming api (track), and I've got thousands of
> >> searches.  (http://tweettronics.com) I mainly need it to deal with
> >> terms that are very high volume, and to deal search api rate limiting.
>
> >> The main difficulty I'm thinking about is the best way to de-multiplex
> >> the stream back into the individual searches I'm trying to accomplish.
>
> >> 1. How do you handle if the searches are more complex than single
> >> terms, but a boolean expression... Do you convert the boolean into
> >> something like regex, and then run that regex on every tweet... So if
> >> I have several thousand regexs and thousands of tweets, that's a huge
> >> amount of processing just to demultiplex... But is that the way to go?
> >> 2 And if the search is just a simple expression, do folks simply
> >> demultiplex by doing a string search for each word in the search for
> >> every received tweet... like above?
>
> >> I'm looking for recommended ways to demultiplex the search stream...
>
> >> Thanks,
> >> jeffrey greenberg
>
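A bare-bones sketch of the scheme John describes: a keyword-to-subscribers map,
tokenize each tweet once, fan out, and deduplicate per subscriber by status id.
Boolean expressions would still need a second, per-subscriber pass, which is the
gap the regex/Lucene question above is about. The class and method names are
illustrative, not from any library:

    from collections import defaultdict

    class Subscriber(object):
        def __init__(self, name, keywords):
            self.name = name
            self.keywords = set(k.lower() for k in keywords)
            self.seen_ids = set()   # generational dedup: rotate/trim this in real use

        def deliver(self, status_id, text):
            if status_id in self.seen_ids:
                return              # already delivered via another matching keyword
            self.seen_ids.add(status_id)
            print("%s <- %s" % (self.name, text))

    class Demultiplexer(object):
        def __init__(self):
            self.index = defaultdict(set)   # keyword -> subscribers

        def subscribe(self, sub):
            for kw in sub.keywords:
                self.index[kw].add(sub)

        def unsubscribe(self, sub):
            for kw in sub.keywords:
                self.index[kw].discard(sub)

        def route(self, status_id, text):
            # Tokenize once per tweet, then apply each token to the hash table.
            for token in set(text.lower().split()):
                for sub in self.index.get(token, ()):
                    sub.deliver(status_id, text)

    # demux = Demultiplexer()
    # demux.subscribe(Subscriber("acme", ["acme", "acmecorp"]))
    # demux.route(12345, "Loving the new ACME widgets")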


[twitter-dev] Re: Recommended ways to demultiplex the search stream with thousands of searches

2010-04-19 Thread Jeffrey Greenberg
I was unable to attend Chirp in person, so I could not hear John
Kalucki's comments on this... Anyone have any notes on this... John?

j

On Apr 16, 3:36 pm, Jeffrey Greenberg 
wrote:
> So I'm looking at the streaming api (track), and I've got thousands of
> searches.  (http://tweettronics.com) I mainly need it to deal with
> terms that are very high volume, and to deal search api rate limiting.
>
> The main difficulty I'm thinking about is the best way to de-multiplex
> the stream back into the individual searches I'm trying to accomplish.
>
> 1. How do you handle if the searches are more complex than single
> terms, but a boolean expression... Do you convert the boolean into
> something like regex, and then run that regex on every tweet... So if
> I have several thousand regexs and thousands of tweets, that's a huge
> amount of processing just to demultiplex... But is that the way to go?
> 2 And if the search is just a simple expression, do folks simply demultiplex
> by doing a string search for each word in the search for
> every received tweet... like above?
>
> I'm looking for recommended ways to demultiplex the search stream...
>
> Thanks,
> jeffrey greenberg
>


[twitter-dev] Recommended ways to demultiplex the search stream with thousands of searches

2010-04-16 Thread Jeffrey Greenberg
So I'm looking at the streaming api (track), and I've got thousands of
searches.  ( http://tweettronics.com ) I mainly need it to deal with
terms that are very high volume, and to deal search api rate limiting.

The main difficulty I'm thinking about is the best way to de-multiplex
the stream back into the individual searches I'm trying to accomplish.

1. How do you handle if the searches are more complex than single
terms, but a boolean expression... Do you convert the boolean into
something like regex, and then run that regex on every tweet... So if
I have several thousand regexs and thousands of tweets, that's a huge
amount of processing just to demultiplex... But is that the way to go?
2 And if the search is just a simple expression, do folks simply
demultiplex by doing a string search for each word in the search for
every received tweet... like above?

I'm looking for recommended ways to demultiplex the search stream...

Thanks,
jeffrey greenberg






[twitter-dev] Twitter Archive - Looking for an API...

2010-04-16 Thread Jeffrey Greenberg
OK, so Google has an archive and the Library of Congress has an
archive... I want to access someone's archive of tweets via a solid,
performant, RESTful API... any suggestions?  Twitter?

ps: I'm worried the LOC will not be performant enough, and they are
making noises like "for research use", which won't help me.

jeffrey greenberg
http://www.tweettronics.com
http://www.jeffrey-greenberg.com




Re: [twitter-dev] Re: Fred Wilson article on Twitter API

2010-04-12 Thread Jeffrey Greenberg
*I'm extremely unsettled.* I'm agreeing with Dewald Pretorius's comments
above... Here's an earlier related story: I was the first to market with a
drag & drop interface for Windows... Yes: back in the stone ages of Windows
2.x and Windows 3.0 there was no such thing.  And soon after, HP, Xerox, and
some other companies (Norton => Symantec, Central Point) all started to play
in that arena. And after a longer while Microsoft said, sorry folks, this is
going to be our playground... and wiped us all out of there... I swore I
would not ride someone's coattails again, but I have (it's not possible to
not ride on someone's coattails)... But in this case Twitter didn't seem so
predatory, and I want to believe in the good side of "social" vs. "it's just
business", and I've met Ev and some Twitter folk (before Twitter) and was
impressed with them as people.  So I'm extremely unsettled by the lack of
clarity on Twitter's business intent.  I would appreciate some clarity on
Twitter's business direction.  Fred Wilson's post, on top of other things that
have been said by Twitter, and the dialogs between Arrington and Loic are
extremely unsettling.  I'd rather fail quickly than go through a long,
slow, and expensive death.

With that said, if I were in your shoes holding the cards, even if you're a
kind player, I cannot imagine doing anything much different.  And if it were
my business you were purchasing, I'd be elated, and sympathetic to everyone
else. You're more likely figuring it all out, just as we are.  I'll live
with being unsettled, but if you can clarify, it would be appreciated.

jeffrey greenberg
http://www.tweettronics.com
http://www.jeffrey-greenberg.com



On Sun, Apr 11, 2010 at 7:14 AM, Dewald Pretorius  wrote:

> You are also free and welcome to express your opinions, even when
> hiding behind a veil of anonymity.
>
> On Apr 11, 9:03 am, notinfluential  wrote:
> > Totally over-dramatic.  And way beyond annoying at this point.
> >
> > Dewald, quit your whining and either get back to coding and doing
> > something productive, or maybe you should aim your posts at this group
> > instead:
> http://groups.google.com/group/delusional-socialist-development-talk
> >
> > @notinfluential
> >
> > On Apr 10, 11:05 am, Raffi Krikorian  wrote:
> >
> >
> >
> > > > Twitter has now displayed a distinctive predatorial stance towards the
> > > > developer ecosystem.
> >
> > > That's incredibly overdramatic, I think. We have, and continue to
> > > maintain a platform that will allow for a vibrant ecosystem.  We want
> > > everybody to succeed.
> >
>
>


[twitter-dev] Regarding Recent vs Most Popular parameters in Search api

2010-03-22 Thread Jeffrey Greenberg
I understand where you're headed regarding 'search'... What I'd like is
for the default on the current search API to be unchanged, so that it
still returns "recent" by default...  That way I don't have to change
my existing application (www.tweettronics.com) to ensure its adherence
to the current behavior.

ok?

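For anyone reading this later: the Search API gained a result_type parameter
around this time, so the old behavior can be pinned explicitly rather than
relying on the default. The parameter name is per the v1 Search API docs of that
era; verify against whatever version you are on.

    import urllib

    # Ask for recent results explicitly instead of relying on the default.
    params = urllib.urlencode({
        "q": "tweettronics",
        "rpp": 100,
        "result_type": "recent",   # other values at the time: "popular", "mixed"
    })
    url = "http://search.twitter.com/search.json?%s" % params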


[twitter-dev] Re: Getting server 500 errors starting on 1/25/2010 using show api

2010-01-27 Thread Jeffrey Greenberg
Hmmm... I'm not seeing 500 errors anymore either...

transient problem?

j

On Jan 26, 5:18 pm, Kevin Marshall  wrote:
> That's what I see as well.
>
> - Kevin
> http://wow.ly
>
>
>
> On Tue, Jan 26, 2010 at 7:48 PM, Raffi Krikorian  wrote:
> > i'm confused - what are people seeing?  i'm seeing a 404 on that status, not
> > a 500.
> > [ra...@tw-mbp13-raffi twitter (homing_pigeon)]$ curl -v
> >http://twitter.com/statuses/show/15527375.xml
> > * About to connect() to twitter.com port 80 (#0)
> > *   Trying 168.143.162.68... connected
> > * Connected to twitter.com (168.143.162.68) port 80 (#0)
> >> GET /statuses/show/15527375.xml HTTP/1.1
> >> User-Agent: curl/7.16.3 (powerpc-apple-darwin9.0) libcurl/7.16.3
> >> OpenSSL/0.9.7l zlib/1.2.3
> >> Host: twitter.com
> >> Accept: */*
>
> > < HTTP/1.1 404 Not Found
> > < Date: Wed, 27 Jan 2010 00:47:26 GMT
> > < Server: hi
> > < X-RateLimit-Limit: 2
> > < X-Transaction: 1264553246-49270-7281
> > < Status: 404 Not Found
> > < Last-Modified: Wed, 27 Jan 2010 00:47:26 GMT
> > < X-RateLimit-Remaining: 19765
> > < X-Runtime: 0.02460
> > < Content-Type: application/xml; charset=utf-8
> > < Pragma: no-cache
> > < Content-Length: 150
> > < X-RateLimit-Class: api_whitelisted
> > < Cache-Control: no-cache, no-store, must-revalidate, pre-check=0,
> > post-check=0
> > < Expires: Tue, 31 Mar 1981 05:00:00 GMT
> > < X-Revision: DEV
> > < X-RateLimit-Reset: 1264555010
> > < Set-Cookie:
> > _twitter_sess=BAh7CToOcmV0dXJuX3RvIjJodHRwOi8vdHdpdHRlci5jb20vc3RhdHVzZXMv% 
> > 250Ac2hvdy8xNTUyNzM3NS54bWw6EXRyYW5zX3Byb21wdDA6B2lkIiVkYTI3NTQ0%250AODg1NW 
> > I1M2U2YmE0ZDk3ZjUzYTRkOTYyNSIKZmxhc2hJQzonQWN0aW9uQ29u%250AdHJvbGxlcjo6Rmxh 
> > c2g6OkZsYXNoSGFzaHsABjoKQHVzZWR7AA%253D%253D--c18561191b4733080388d38fa9461 
> > b6f851b16dc;
> > domain=.twitter.com; path=/
> > < Vary: Accept-Encoding
> > < Connection: close
> > <
> > 
> > 
> >   /statuses/show/15527375.xml
> >   No status found with that ID.
> > 
> > * Closing connection #0
>
> > On Tue, Jan 26, 2010 at 4:44 PM, Jeffrey Greenberg
> >  wrote:
>
> >> To be accurate: most ids do work... We had no HTTP status 500 errors
> >> for quite a while, so this is new and different and bad behavior.
> >> We've had a working application that has been functioning for more
> >> than a year; way back when, these errors were frequent, and then
> >> Twitter did a lot of new/good work and they've all but gone away (at
> >> least on this API)... until now.
> >> .
>
> >> On Jan 26, 12:39 pm, Kevin Marshall  wrote:
> >> > Yes - seems to be a problem for any id other than the example one in
> >> > the documentation:
>
> >> >http://twitter.com/statuses/show/1472669360.xml (works)
>
> >> >http://twitter.com/statuses/show/12735452.xml (reports no statuses,
> >> > but this is my account and so I can confirm that there are statuses
> >> > there to report -- as http://twitter.com/users/show.xml?id=12735452
> >> > also confirms).
>
> >> > BTW - if you use the user_timeline method, I think you can get the
> >> > same status stuff
> >> > (http://twitter.com/statuses/user_timeline.xml?id=12735452)
>
> >> > - Kevin
>
> >> > On Tue, Jan 26, 2010 at 3:11 PM, Jeffrey Greenberg
>
> >> >  wrote:
> >> > > For instance:http://twitter.com/statuses/show/15527375.xml
>
> >> > > anyone else seeing these?
>
> > --
> > Raffi Krikorian
> > Twitter Platform Team
> >http://twitter.com/raffi


[twitter-dev] Re: Getting server 500 errors starting on 1/25/2010 using show api

2010-01-26 Thread Jeffrey Greenberg
To be accurate: most ids do work... We had no HTTP status 500 errors
for quite a while, so this is new and different and bad behavior.
We've had a working application that has been functioning for more
than a year; way back when, these errors were frequent, and then
Twitter did a lot of new/good work and they've all but gone away (at
least on this API)... until now.


On Jan 26, 12:39 pm, Kevin Marshall  wrote:
> Yes - seems to be a problem for any id other than the example one in
> the documentation:
>
> http://twitter.com/statuses/show/1472669360.xml (works)
>
> http://twitter.com/statuses/show/12735452.xml (reports no statuses,
> but this is my account and so I can confirm that there are statuses
> there to report -- as http://twitter.com/users/show.xml?id=12735452
> also confirms).
>
> BTW - if you use the user_timeline method, I think you can get the
> same status stuff (http://twitter.com/statuses/user_timeline.xml?id=12735452)
>
> - Kevin
>
> On Tue, Jan 26, 2010 at 3:11 PM, Jeffrey Greenberg
>
>
>
>  wrote:
> > For instance:http://twitter.com/statuses/show/15527375.xml
>
> > anyone else seeing these?


[twitter-dev] Re: Getting server 500 errors starting on 1/25/2010 using show api

2010-01-26 Thread Jeffrey Greenberg
For instance: http://twitter.com/statuses/show/15527375.xml

anyone else seeing these?


[twitter-dev] Getting server 500 errors starting on 1/25/2010 using show api

2010-01-26 Thread Jeffrey Greenberg
Since 2010-01-25 23:19:02 GMT we are seeing 500 server errors, which
we haven't seen in a long while. Using the http://twitter.com/users/show.xml
API...  Not on every call though... it seems as if some server in your
array/chain is choking somehow?



Re: [twitter-dev] Re: Anyone using phirehose?

2010-01-22 Thread Jeffrey Greenberg
You need to look into 'nohup'.
jeffrey greenberg

On Fri, Jan 22, 2010 at 10:45 AM, GeorgeMedia  wrote:

> Just in case anyone is having the same issue I had with PHP scripts
> running from the command line stopping on them, I discovered my
> problem.
>
> I was connecting to the linux server via SSH client remotely. I'd log
> into a bash shell, CD over to the directory and run my script in the
> background like -- "php script.php &" (the & is to run as a background
> process).
>
> The problem: I'm embarrassed to say that the problem was whenever my
> SSH client disconnected or timed out it killed the process I had
> running in that session. Don't know why it took me so long to connect
> those easy dots.
>
> The solution: Log into your system like normal then open another bash
> session inside your session (bash). Then execute your script and exit
> out of the extra session (exit).
>
> On Jan 15, 11:17 am, GeorgeMedia  wrote:
> > I'm looking for a solid PHP library to access the gardenhose and just
> > wondering if anyone is successfully implementing this using phirehose.
> > It seems to be the only one out there...
> >
> > This fairly dated code seems to work for random periods of time then
> > stops.
> >
> > http://hasin.wordpress.com/2009/06/20/collecting-data-from-streaming-...
>


[twitter-dev] Re: Streaming Api - Keywords matched

2009-11-03 Thread Jeffrey Greenberg

It would help if John Kalucki (hello) would clarify the difference
between what is visible via streaming as opposed to what is visible
via search.

I've been operating under the assumption that streaming is warranted
when an app needs a different or more powerful search than the current
one (e.g. nested boolean expressions), or is interested in seeing
tweets before they are filtered out by Twitter's spam detection
(dealing with the tweet removal protocol, etc.).  As developers, it
would help us if you could lay out what the Twitter data pipeline
looks like, and where the various APIs plug in, so that we know what
we get when we plug in there.  I assume, for instance, that search is
farther downstream than the various firehose/stream APIs, but I have
little idea (or documentation) of what steps the data goes through as it moves
down the pipe.

Would Twitter be open to shedding some light?
jeffrey greenberg
tweettronics.com

On Nov 3, 9:59 am, Fabien Penso  wrote:
> I agree, however it would help a lot because instead of doing :
>
> for keyword in all_keywords
>  if tweet.match(keyword)
>   //matched, notify users
>  end
> end
>
> we could do
>
> for keyword in keywords_matched
>  // same as above
> end
>
> for matching 5,000 keywords, it would bring the first loop from 5,000
> to probably 1 or 2.
> You know what you matched, so it's quite easy for you just to include
> raw data of matched keywords; I don't need anything fancy. Just space-
> separated keywords would help _so much_.
>
>
>
> On Tue, Nov 3, 2009 at 3:15 PM, John Kalucki  wrote:
>
> > The assumption is that client services will, in any case, have to
> > parse and route statuses to potentially multiple end-users. Providing
> > this sort of hint wouldn't eliminate the need to parse the status and
> > would likely result in duplicate effort. We're aware that we are, in
> > some use cases, externalizing development effort, but the use cases
> > for the Streaming API are so many that it's hard to define exactly
> > how much this feature would help and therefore how much we're
> > externalizing.
>
> > -John Kalucki
> >http://twitter.com/jkalucki
> > Services, Twitter Inc.
>
> > On Nov 3, 1:53 am, Fabien Penso  wrote:
> >> Hi.
>
> >> Would it be possible to include the matched keywords in another field
> >> within the result from the streaming/keyword API?
>
> >> It would prevent matching those myself when matching for multiple
> >> internal users, to spread the tweets to the legitimate users, which
> >> can be time consuming and tough to do on lots of users/keywords.
>
> >> Thanks.
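Until such a hint exists, the matched-keyword set has to be recomputed
client-side; with a token set per tweet that is one set intersection rather than
a scan of all 5,000 keywords. A short sketch of that idea (not an API feature;
the keyword list is hypothetical):

    TRACKED_KEYWORDS = set(["keyword1", "keyword2", "chirp"])  # hypothetical terms

    def matched_keywords(tweet_text, keywords=TRACKED_KEYWORDS):
        """Return the subset of tracked keywords present in the tweet's text."""
        tokens = set(w.strip("#@.,!?:;'\"").lower() for w in tweet_text.split())
        return keywords & tokens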


[twitter-dev] Field sizes documented as maximum "character counts" vs number of utf8 encoded bytes

2009-11-03 Thread Jeffrey Greenberg

The user description field (for instance) is documented as a maximum
of 160 characters, but how many bytes is that exactly?  If we get
UTF-8 bytes from Twitter, what is the maximum number of bytes per
character we should expect: 1, 2, 3 or 4?

This matters when spec'ing a database field size, for instance.

I'm asking whether Twitter means "characters" or "UTF-8 encoded
bytes" when it specs a field size in the docs.

Thanks in advance...
(I would appreciate it if this were clarified in the docs.)
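Pending a documentation fix, the safe sizing assumption is the documented
character count times four bytes, the UTF-8 worst case, i.e. 160 characters can
occupy up to 640 bytes. A quick check:

    # UTF-8 encodes one character in 1 to 4 bytes; size byte-oriented columns
    # for the worst case.
    samples = [u"a", u"\u00e9", u"\u20ac", u"\U0001d11e"]   # 1-, 2-, 3- and 4-byte chars
    for ch in samples:
        print("%r -> %d bytes" % (ch, len(ch.encode("utf-8"))))

    MAX_DESCRIPTION_CHARS = 160          # as documented
    print("worst case: %d bytes" % (MAX_DESCRIPTION_CHARS * 4))   # 640

If the backing store is MySQL, that roughly translates to a 160-character column
in a 4-byte-per-character charset such as utf8mb4, or 640 bytes of raw storage.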



[twitter-dev] Re: Nero 9 - FULL Version - [Precracked] 51MB ONLY!

2009-10-19 Thread Jeffrey Greenberg
This looks just great... can't wait to try it.
j

On Mon, Oct 19, 2009 at 2:01 PM, Peter Denton wrote:

> I would say, considering I can only recall a few spam posts getting
> through, you guys [sic] do a great job.
>
>
> On Mon, Oct 19, 2009 at 1:34 PM, Chad Etzel  wrote:
>
>>
>> Why yes we can, and we do... loads of it.
>>
>> The problem is that these spammers are spoofing the "from" address of
>> list owners who usually get automatically posted and skip the
>> moderation step. This is a flaw of the way Google Groups handles
>> incoming posts, and not of the group admins.
>>
>> -Chad
>>
>> On Mon, Oct 19, 2009 at 4:28 PM, Dave Briccetti 
>> wrote:
>> >
>> > Google group admins can actually DELETE spam, too, which would be
>> > nice.
>> >
>>
>
>


[twitter-dev] Re: Twitter, Please Explain How Cursors Work

2009-10-07 Thread Jeffrey Greenberg
John, please clarify this scenario. If one makes a complete set of calls
starting from cursor -1 to the end at one moment, and then another set of
the same calls later, is there any invariance?  If so, what?

From the statements above I understand:
- always 5000 followers are returned (if the user has more than 5000, and
the last call will have less)
- the order is the same: it's the time order that users followed this
account

And thus:
- there is no correlation in the API between a particular cursor and a set
of returned values (followers)

Is that it?


On Tue, Oct 6, 2009 at 4:12 PM, John Kalucki  wrote:

>
> I described, in some detail, the reasons for cursors here:
>
> http://groups.google.com/group/twitter-development-talk/msg/badfb7b6074aab10
>
> If the details are uninteresting, the high-level summary is this: The
> paged API was designed in a previous era. Paging is simply too
> expensive and totally impractical to provide with the current
> following counts. Also the QoS had deteriorated to the point where
> some doubted that anyone was seriously using the methods. Paging is
> going away and paging is not coming back.
>
> The cursored approach allows us to continue to provide access to the
> social graph via the REST API. As a benefit, QoS has been dramatically
> improved and data quality is now pretty close to perfect.
>
> If the implementation details and invariants described are confusing,
> then stick to the well worn part of the path: Request the first block
> with a cursor of -1. Keep requesting forward until you get a cursor of
> 0.
>
> -John Kalucki
> http://twitter.com/jkalucki
> Services, Twitter Inc.
>
> On Oct 6, 11:06 am, Jesse Stay  wrote:
> > I said the same thing in the last thread about this - still no clue what
> > Twitter is doing with cursors and how it is any different than the
> > previous paging methods.
> > Jesse
> >
> > On Tue, Oct 6, 2009 at 10:22 AM, Dewald Pretorius 
> wrote:
> >
> > > Thanks John. However, I will be the first to put up my hand and say
> > > that I have no clue what you said.
> >
> > > Can someone please translate John's answer into easy to understand
> > > language, with specific relation to the questions I asked?
> >
> > > Dewald
> >
> > > On Oct 5, 1:17 am, John Kalucki  wrote:
> > > > I haven't looked at all the parts of the system, so there's some
> > > > chance that I'm missing something.
> >
> > > > The method returns the followers in the reverse chronological order of
> > > > edge creation. Cursor A will have the most recent 5,000 edges, by
> > > > creation time, B the next most recent 5,000, etc. The last cursor will
> > > > have the oldest edges.
> >
> > > > Each cursor points to some arbitrary edge. If you go back and retrieve
> > > > cursor B, you should receive N edges created just before the edge-
> > > > pointed-to-by-B was created. I don't recall if N is always 5000,
> > > > generally 5000 or if it's at most 5000. This detail shouldn't matter,
> > > > other than, on occasion, you'll make an extra API call.
> >
> > > > In any case, retrieving cursor B will never return edges created after
> > > > the edge-pointed-to-by-B was created. All edges returned by cursor B
> > > > will be no-newer-than, and generally older than, the
> > > > edge-pointed-to-by-B.
> >
> > > > So, all future sets returned by cursor B are always disjoint from the
> > > > set originally returned by cursor A. In your example, if you refetched
> > > > both A and B, the result sets wouldn't be disjoint as there are no
> > > > longer 5,000 edges between cursor A and cursor B.
> >
> > > > I think this, in part, answers your question?
> >
> > > > -John Kalucki
> > > > http://twitter.com/jkalucki
> > > > Services, Twitter Inc.
> >
> > > > On Oct 4, 6:10 pm, Dewald Pretorius  wrote:
> >
> > > > > For discussion purposes, let's assume I am cursoring through a very
> > > > > volatile followers list of @veryvolatile. We have the following
> > > > > cursors:
> >
> > > > > A = 5,000
> > > > > B = 5,000
> > > > > C = 5,000
> >
> > > > > I retrieve Cursor A and process it. Next I retrieve Cursor B and
> > > > > process it. Then I retrieve Cursor C and process it.
> >
> > > > > While I am processing Cursor C, 200 of the people who were in Cursor A
> > > > > unfollow @veryvolatile, and 400 of the people who were in Cursor B
> > > > > unfollow @veryvolatile.
> >
> > > > > What do I get when I go back from C to B? Do I now get 4,600 ids in
> > > > > the list?
> >
> > > > > Or, do I get 5,000 in B, which now includes a subset of 400 ids that
> > > > > were previously in Cursor A?
> >
> > > > > Dewald
>


[twitter-dev] Re: SERIOUS Problem With Cursors In JSON Followers/Friends Ids

2009-09-24 Thread Jeffrey Greenberg
My 2 cents are:
1. we're using the XML form of the API on 32-bit development machines and it
works fine.
2. not supporting 32-bit machines seems like a bad idea for Twitter and
developers, no matter who (Twitter or PHP) you want to blame for the problem;
  a) PHP is perhaps the most popular web development language out there, so
why make this difficult?
  b) a 64-bit machine costs more than a 32-bit one, and that cost matters to
startups
3. conversion to strings is a "good idea"... it's a red herring to talk about
space, since once you get the data you can convert it as you wish, and it goes
over the wire as a string in any case (doesn't it?). A sketch of the kind of
client-side workaround I mean follows below.
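
To be concrete (and to be clear, this is my own assumption, not anything from
the PHP docs or Twitter): quote the big numeric ids in the raw JSON before
json_decode() so a 32-bit build never tries to treat them as ints. The field
names in the regex are the common ones; extend as needed.

<?php
// Sketch: keep large ids as strings on 32-bit PHP by quoting them in the raw
// JSON before decoding. 10+ digit numbers are the ones that overflow a
// signed 32-bit int.
function decode_with_string_ids($json) {
    $json = preg_replace(
        '/"(id|in_reply_to_status_id|next_cursor|previous_cursor)":\s*(\d{10,})/',
        '"$1":"$2"',
        $json
    );
    return json_decode($json, true);
}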

jeffrey greenberg
http://www.tweettronics.com
http://www.inventivity.com

On Thu, Sep 24, 2009 at 8:34 AM, Jesse Stay  wrote:

> On Thu, Sep 24, 2009 at 9:26 AM, Dewald Pretorius wrote:
>
>> This goes for any large numbers, including tweet ids. As far as I am
>> concerned they can output everything in JSON as strings.
>>
>>
> That would create quite a memory footprint! I prefer to use ints where
> possible and strings only where necessary. I think it would be to your
> benefit to just convert to 64-bit PHP. While PHP is type-less, other
> languages aren't, and converting back to int is much more a pain in C than
> it is in PHP. I suggest Twitter leave it the way it is - it should be up to
> the end recipient to convert it in a way that works.  Maybe write some new
> JSON libraries that parse it correctly? That's what open source is for.
>
> Jesse
>


[twitter-dev] Re: 200 "errors"

2009-08-25 Thread Jeffrey Greenberg

I am seeing this error right now when doing a search (FWIW: I'm
using since_id).
This is seriously messing things up!
@jeffGreenberg
@tweettronics

Details:

url: 
http://search.twitter.com/search.json?q=%23fail%20since%3A2009-08-19&rpp=100&since_id=3397530515

httpresponse = 200

returned text:
[The body was an HTML error page rather than JSON; the archive stripped the
markup and only the HTML 4.01 strict doctype fragment survives:
http://www.w3.org/TR/1999/REC-html401-19991224/strict.dtd]
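
For anyone else hitting this: since the status is 200 but the body is HTML, a
minimal client-side guard might look like this (a sketch; the query URL is just
an example, and the fetch could equally be the curl code used elsewhere):

<?php
// Sketch: refuse to use a 200 response whose body doesn't decode as JSON
// (e.g. the HTML error page above), instead of feeding it into the pipeline.
$url  = 'http://search.twitter.com/search.json?q=%23fail&rpp=100';
$body = file_get_contents($url);   // or the curl fetch used elsewhere
if ($body === false || json_decode($body, true) === null
        || stripos($body, '<!DOCTYPE') !== false) {
    error_log('search returned a non-JSON body for ' . $url);
    // back off and retry, or skip this poll cycle
}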


[twitter-dev] Re: My Issue with the ReTweet API and my solutions

2009-08-18 Thread Jeffrey Greenberg
I also am not on fire about this API... Since our app, www.tweettronics.com,
tracks users' Twitter activity, I think that while retweeting has been popular,
it's mainly been a tool for spammage as part of users' desire for
self-promotion, and less a tool for attribution. Still, users have taken it
up... what I'm seeing is that it's primarily used to generate tweets and thus
attention to one's account...
On the other hand, the API at least formalizes what is going on, making it
easier to track retweeting behavior, to the extent that such tracking has any
fundamental value for estimating user influence given the signal-to-noise
ratio involved.
I do think this API can help solve other issues, such as the challenge of
threaded tweets. Is support for threaded tweets an intended effect of the API?

jeffrey greenberg
http://www.tweettronics.com
http://www.jeffrey-greenberg.com


[twitter-dev] Re: 2 week advance notice: changes to /friends/ids and /followers/ids

2009-08-04 Thread Jeffrey Greenberg

Chiming in: please do support both methods of access for "a while"
rather than doing a hard cutover... thx!  At least two weeks would be
appreciated...
jeffrey greenberg
http://www.inventivity.com
http://www.tweettronics.com

On Aug 4, 10:15 am, Alex Payne  wrote:
> What our infrastructure team has told me is that they can support both 
> behaviors for a limited period of time


[twitter-dev] Re: Intermittent network failures?

2009-07-12 Thread Jeffrey Greenberg

I spoke too casually.  For the sake of accuracy: I too do not see this
as a new problem: it's been going on for months, not just weeks or
just recently...

On Jul 10, 1:17 pm, Dossy Shiobara  wrote:
> On 7/10/09 3:38 PM, Jeffrey Greenberg wrote:
>
> > Just to say it, this has been going on for weeks
>
> Actually, months ... at least as far as I've noticed it.
>


[twitter-dev] Re: Intermittent network failures?

2009-07-10 Thread Jeffrey Greenberg

Just to say it, this has been going on for weeks

jeffrey
http://www.tweettronics.com


On Jul 10, 11:52 am, Matt Sanford  wrote:
> Hi all,
>
>      There is currently a back-end issue and our operation folks are  
> working on it. Hopefully it will be resolved soon. I'll update you  
> when I know more.
>
> Thanks;
>   – Matt Sanford / @mzsanford
>       Twitter Dev
>
> On Jul 10, 2009, at 11:51 AM, João Pereira wrote:
>
>
>
> > Hi,
>
> > I'm also having some problems working with twitter API since the  
> > past few hours. Even with the Web interface I'm not able to complete  
> > a follow action, for example.
>
> > It's there anything going on?
>
> > On Fri, Jul 10, 2009 at 6:59 PM, Jeffrey Greenberg wrote:
>
> > I'm not sure what these are but I see them often enough to wonder
> > about the reliability of the network between Twitter and my app.  The
> > portion of my app that speaks to Twitter runs on Amazon AWS/EC2.  I
> > see a small variety of curl failures that occur throughout the day.
> > I'm not clear whether these reflect Twitter issues, EC2 issues, or my
> > app.
>
> > I'd appreciate any illumination as to which of these might be Twitter
> > issues and which are not...
>
> > I see 5 different curl error codes, and taken together with the various
> > Twitter API calls I'm making, there are 23 different variants altogether.
> > Here they are:
>
> > Curl error: 0.  
> > url:http://twitter.com/followers/ids.xml?user_id=18057710&page=1
> > Curl error: 0.  url:http://twitter.com/followers/ids.xml?user_id=19966258
> > Curl error: 0.  url:http://twitter.com/friends/ids.xml?user_id=14080067
> > Curl error: 0.  
> > url:http://twitter.com/friends/ids.xml?user_id=14623539&page=1
> > Curl error: 0.  
> > url:http://twitter.com/users/show.xml?screen_name=BryanMcKinney
> > Curl error: 0.  url:http://twitter.com/users/show.xml?
> > user_id=10063932
> > Curl error: 7. couldn't connect to host url:
> >http://search.twitter.com/search.json?q=neutrogena%20AND%20ultra%20sh...
> > Curl error: 7. couldn't connect to host url:
> >http://twitter.com/followers/ids.xml?user_id=11601722&page=1
> > Curl error: 7. couldn't connect to host url:
> >http://twitter.com/followers/ids.xml?user_id=17825053
> > Curl error: 7. couldn't connect to host url:
> >http://twitter.com/friends/ids.xml?user_id=13436432&page=1
> > Curl error: 7. couldn't connect to host 
> > url:http://twitter.com/friends/ids.xml?user_id=21937700
> > Curl error: 7. couldn't connect to host 
> > url:http://twitter.com/users/show.xml?screen_name=L4S7
> > Curl error: 7. couldn't connect to host 
> > url:http://twitter.com/users/show.xml?user_id=10108342
> > Curl error: 18. transfer closed with 150 bytes remaining to read url:
> >http://twitter.com/users/show.xml?user_id=53631710
> > Curl error: 26. Failed to open/read local data from file/application
> > url:http://twitter.com/friendships/create.xml?screen_name=
> > /* OK this one is obviously a bug in my App  */
> > Curl error: 26. Failed to open/read local data from file/application
> > url:http://twitter.com/friendships/create.xml?screen_name=1WineDude
> > Curl error: 52. Empty reply from server url:
> >http://search.twitter.com/search.json?page=12&max_id=2500368394&rpp=1...
> > Curl error: 52. Empty reply from server url:
> >http://twitter.com/followers/ids.xml?user_id=11601722&page=1
> > Curl error: 52. Empty reply from server url:
> >http://twitter.com/followers/ids.xml?user_id=15476479
> > Curl error: 52. Empty reply from server 
> > url:http://twitter.com/friends/ids.xml?user_id=27641196
> > Curl error: 52. Empty reply from server url:
> >http://twitter.com/friends/ids.xml?user_id=37113325&page=1
> > Curl error: 52. Empty reply from server url:
> >http://twitter.com/users/show.xml?screen_name=Cardenas79
> > Curl error: 52. Empty reply from server 
> > url:http://twitter.com/users/show.xml?user_id=1233581


[twitter-dev] Intermittent network failures?

2009-07-10 Thread Jeffrey Greenberg

I'm not sure what these are, but I see them often enough to wonder
about the reliability of the network between Twitter and my app.  The
portion of my app that speaks to Twitter runs on Amazon AWS/EC2.  I
see a small variety of curl failures that occur throughout the day.
I'm not clear whether these reflect Twitter issues, EC2 issues, or my
app.

I'd appreciate any illumination as to which of these might be Twitter
issues and which are not...

I see 5 different curl error codes, and taken together with the various
Twitter API calls I'm making, there are 23 different variants altogether.
Here they are:

Curl error: 0.  url: 
http://twitter.com/followers/ids.xml?user_id=18057710&page=1
Curl error: 0.  url: http://twitter.com/followers/ids.xml?user_id=19966258
Curl error: 0.  url: http://twitter.com/friends/ids.xml?user_id=14080067
Curl error: 0.  url: http://twitter.com/friends/ids.xml?user_id=14623539&page=1
Curl error: 0.  url: http://twitter.com/users/show.xml?screen_name=BryanMcKinney
Curl error: 0.  url: http://twitter.com/users/show.xml?user_id=10063932
Curl error: 7. couldn't connect to host url:
http://search.twitter.com/search.json?q=neutrogena%20AND%20ultra%20sheer&rpp=100
Curl error: 7. couldn't connect to host url:
http://twitter.com/followers/ids.xml?user_id=11601722&page=1
Curl error: 7. couldn't connect to host url:
http://twitter.com/followers/ids.xml?user_id=17825053
Curl error: 7. couldn't connect to host url:
http://twitter.com/friends/ids.xml?user_id=13436432&page=1
Curl error: 7. couldn't connect to host url: 
http://twitter.com/friends/ids.xml?user_id=21937700
Curl error: 7. couldn't connect to host url: 
http://twitter.com/users/show.xml?screen_name=L4S7
Curl error: 7. couldn't connect to host url: 
http://twitter.com/users/show.xml?user_id=10108342
Curl error: 18. transfer closed with 150 bytes remaining to read url:
http://twitter.com/users/show.xml?user_id=53631710
Curl error: 26. Failed to open/read local data from file/application
url: http://twitter.com/friendships/create.xml?screen_name=
/* OK this one is obviously a bug in my App  */
Curl error: 26. Failed to open/read local data from file/application
url: http://twitter.com/friendships/create.xml?screen_name=1WineDude
Curl error: 52. Empty reply from server url:
http://search.twitter.com/search.json?page=12&max_id=2500368394&rpp=100&q=stock+market+since%3A2009-07-05
Curl error: 52. Empty reply from server url:
http://twitter.com/followers/ids.xml?user_id=11601722&page=1
Curl error: 52. Empty reply from server url:
http://twitter.com/followers/ids.xml?user_id=15476479
Curl error: 52. Empty reply from server url: 
http://twitter.com/friends/ids.xml?user_id=27641196
Curl error: 52. Empty reply from server url:
http://twitter.com/friends/ids.xml?user_id=37113325&page=1
Curl error: 52. Empty reply from server url:
http://twitter.com/users/show.xml?screen_name=Cardenas79
Curl error: 52. Empty reply from server url: 
http://twitter.com/users/show.xml?user_id=1233581
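
In the meantime I'm treating error codes 0, 7 and 52 as transient and retrying;
roughly like this (a sketch only; the retry count, timeout and backoff are my
own guesses, not anything Twitter recommends):

<?php
// Sketch: retry curl failures that look transient (errno 0 with no body, 7,
// 52) a few times with a crude backoff, and log anything else.
function fetch_with_retry($url, $tries = 3) {
    $errno = 0;
    for ($i = 0; $i < $tries; $i++) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 30);
        $body  = curl_exec($ch);
        $errno = curl_errno($ch);
        curl_close($ch);
        if ($body !== false && $errno === 0) {
            return $body;                          // success
        }
        if (!in_array($errno, array(0, 7, 52))) {
            break;                                 // not a known-transient error
        }
        sleep(2 * ($i + 1));                       // crude backoff
    }
    error_log("curl error $errno for $url");
    return false;
}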


[twitter-dev] Re: Spamming via addition of trending words to tweets

2009-07-08 Thread Jeffrey Greenberg

I'm liking Andrew's thoughts regarding sensitivity to what spam is,
and am thinking about the Gmail-like vote-if-spam approach.

Wondering if the API community (or really twitterers) would use an API
such as this: smellsLikeSpam( list_of_tweet_ids )...  Twitter could
aggregate the resulting votes and apply policy to them.

If you're writing a Twitter interface app, you'd have to expose this
unpleasant chore to users, which they currently don't have to deal with.
But Gmail is an argument that it can work well and not be too onerous on
users in the aggregate.

I think the problem in general is more dire for search-based
functionality than for general tweeting, since search picks up not only
(somewhat) older tweets but new ones, and potentially in very large
quantities.  So if I pick up a tweet yesterday and today it is flagged
as spam, I'll want to know about that and be able to toss the tweet
(god, what a phrase), which has implications for apps such as mine and
for Twitter too...








[twitter-dev] Re: Spamming via addition of trending words to tweets

2009-07-07 Thread Jeffrey Greenberg

Alex, so you're saying that we ought to auto-report spamming that we
detect.

And I guess we have to formulate some spam detection strategies of our
own...

And obviously you're dealing with spam of different kinds already:
@-spamming and follower spamming, to name two... but can you speak to
this particular kind, which screws up search results?  Does Twitter do
spam detection on tweets?  I understand you should be somewhat secretive
about the approach so that spammers cannot work around it easily, but can
we expect more aggressive filtering from Twitter itself, or is this really
entirely an app responsibility?

Thanks.
jeffrey

On Jul 7, 3:59 pm, Alex Payne  wrote:
> Anyone can send a Direct Message to @spam with the username of a potential
> spammer. We factor those reports into our automated spam detection tools.
> We're well aware of the issue, and we appreciate the help.
>
> On Tue, Jul 7, 2009 at 15:41, Jeffrey Greenberg
> wrote:
>
>
>
>
>
>
>
> > So i'm seeing a ton of tweet spam that appends the trending topics to
> > the tweet.  For example, "Hey here is my http://spam/1234 Michael
> > Jackson MJ iran"
>
> > They get picked up by searches ( for instance see the search "stock
> > market" athttp://www.tweettronics.com )
>
> > What is Twitter doing or planning on doing to deal with this?  It has
> > been noted elsewhere that any tweet with 3 or more trending topics is
> > likely to be spam... Will Twitter institute an automated spam
> > rejection through the API, let alone through its other interfaces?
>
> > I suppose we've entered the era of dealing with Twitter spam with all
> > our apps... ugh
>
> > Please advise
>
> > jeffrey greenberg
> >http://www.jeffrey-greenberg.com
> >http://www.tweettronics.com
>
> --
> Alex Payne - Platform Lead, Twitter, Inc.
> http://twitter.com/al3x


[twitter-dev] Spamming via addition of trending words to tweets

2009-07-07 Thread Jeffrey Greenberg

So I'm seeing a ton of tweet spam that appends the trending topics to
the tweet.  For example, "Hey here is my http://spam/1234 Michael
Jackson MJ iran"

They get picked up by searches (for instance, see the search "stock
market" at http://www.tweettronics.com )

What is Twitter doing, or planning to do, to deal with this?  It has
been noted elsewhere that any tweet containing 3 or more trending topics
is likely to be spam... Will Twitter institute automated spam rejection
through the API, let alone through its other interfaces?
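
As a stopgap on the app side, the "3 or more trending topics" rule is easy to
approximate. A sketch of what I mean (assuming $trends is a list of current
trending phrases pulled from the trends endpoint; the threshold is just the
rule of thumb above):

<?php
// Sketch: flag a tweet as likely trend-spam when its text contains 3 or more
// of the current trending phrases.
function looks_like_trend_spam($tweet_text, $trends, $threshold = 3) {
    $hits = 0;
    foreach ($trends as $phrase) {
        if (stripos($tweet_text, $phrase) !== false) {
            $hits++;
            if ($hits >= $threshold) {
                return true;
            }
        }
    }
    return false;
}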

I suppose we've entered the era of dealing with Twitter spam with all
our apps... ugh

Please advise

jeffrey greenberg
http://www.jeffrey-greenberg.com
http://www.tweettronics.com


[twitter-dev] Re: PostTwitpocalypse: PHP / Windows weirdness in handling large tweet ids?

2009-06-15 Thread Jeffrey Greenberg

I have found that upgrading my Windows PHP development environment to
5.2.6 fixed the problem... It may be that 5.2.4 is sufficient,
though...

(...Well, I'm having a great time talking to myself...)


[twitter-dev] Re: PostTwitpocalypse: PHP / Windows weirdness in handling large tweet ids?

2009-06-15 Thread Jeffrey Greenberg
FYI: it appears this is a well-known PHP shortcoming on certain 32-bit
systems... PHP treats ints as signed 32-bit values... so you have to convert to
double and pay attention to the php.ini precision setting (for float-to-string
conversion) to get consistent results across systems.
But I'm still not clear whether I can convert the double back into the proper
type when I interface to MySQL... again, any illumination would be helpful...
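
What I'm planning to try (a sketch, not something from the PHP or MySQL docs):
format the double back into a plain decimal string with sprintf() and store it
in an unsigned BIGINT column. This is only exact while ids stay below 2^53,
which current tweet ids are nowhere near, and as far as I can tell sprintf's
explicit format is not affected by the php.ini "precision" setting that governs
implicit float-to-string conversion.

<?php
// Sketch: $id stands in for the double that json_decode() produced for a
// large tweet id on a 32-bit build. %.0f yields the full digit string with
// no exponent, suitable for an unsigned BIGINT column.
$id = 2147483648.0;   // example: an id just past the signed 32-bit boundary
$id_for_sql = sprintf('%.0f', $id);
$sql = "INSERT INTO tweets (tweet_id) VALUES (" . $id_for_sql . ")";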


[twitter-dev] PostTwitpocalypse: PHP / Windows weirdness in handling large tweet ids?

2009-06-15 Thread Jeffrey Greenberg

I'm doing development on a Windows box (32-bit) with LAMP/PHP, and
production is Linux based, running on EC2...   I'm handling all tweet
ids as 32-bit unsigned in MySQL, so I'm OK with the Twitpocalypse, at
least on the Linux systems. But my development setup under Windows is
choking on negative tweet ids.

I can see the search results Twitter is sending me as JSON and it has
a new big tweet id.  The php code calls json_decode( $jsonStr ) to get
the data into php types. When I look at the tweet id immediately after
the json_decode() I see a negative value, and so I'm sunk at that
point on my windows/dev box.

I'm wondering if this is a json_decode issue, or a PHP issue under
Windows?  And what can I do about it?
Windows PHP is 5.2.3, and Linux PHP is 5.2.4.

Appreciating your responses and suggestions in advance!

jeffrey
http://www.tweettronics.com
http://www.jeffrey-greenberg.com


[twitter-dev] Re: lots of 404s?

2009-05-28 Thread Jeffrey Greenberg
I seem to be picking these up from the social graph... are they ever elided
from there?


[twitter-dev] Re: lots of 404s?

2009-05-28 Thread Jeffrey Greenberg
hmm... Chrome sometimes shows the XML but mostly just a 404 error, so
it's confusing as to what's actually going on...
Anyway, why are there so many? Admittedly I'm plowing through hundreds of
thousands of users, but it *seems* like a lot of them are 'suspended'...
 What is the lifetime of a suspended user? When does that object disappear
entirely from the system? Or does it not?


[twitter-dev] lots of 404s?

2009-05-27 Thread Jeffrey Greenberg
I'm seeing a lot of 404 failures... some are for users that are suspended and
some are just failing.  These are ids I'm getting from the social
graph APIs, e.g.
this is failing: http://twitter.com/users/show.xml?user_id=41714775

here's some more from my logs:
getuser failed: /33687642 - /33687642 get user  httpstatus: 404
getuser failed: /37079194 - /37079194 get user  httpstatus: 404
getuser failed: /37616625 - /37616625 get user  httpstatus: 404

My site: www.tweettronics.com is whitelisted btw...
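
For now I'm treating a 404 from users/show as "this account is gone or
suspended" and moving on, rather than retrying it the way I retry transient
errors. A sketch of that (mark_user_missing() is a hypothetical helper in my
own app, not a Twitter call):

<?php
// Sketch: fetch users/show for an id; on a 404, remember the id as missing
// or suspended and skip it instead of retrying.
function fetch_user($user_id) {
    $ch = curl_init('http://twitter.com/users/show.xml?user_id=' . $user_id);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $body = curl_exec($ch);
    $http = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    if ($http == 404) {
        // mark_user_missing($user_id);   // hypothetical helper
        return null;
    }
    return $body;   // XML to parse as usual
}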


[twitter-dev] Re: Search: Resolution of Since, and how to avoid pulling redundant search results

2009-05-22 Thread Jeffrey Greenberg
I've got a working solution for pulling in tweets, doing pretty much as
I said, except that it will fail when there is a burst of tweets.  For a
very active search term, say something that exceeds the 1,500-result search
limit (15 pages x 100 tweets/page) per day... tweets will be missed.  For my
application, the odds are that missing a small quantity of tweets isn't
earth shattering, but there's a *chance* it could be...  I think of this as a
Twitter shortcoming...  Wondering if it's worth filing a low-priority bug
for it?



On Fri, May 22, 2009 at 1:24 PM, Doug Williams  wrote:

> As the docs [1] state the correct format is since:-MM-DD which give you
> resolution down to a day.  Any further processing must be done on the client
> side. Given the constraints, utilizing a combination of since: and since_id
> sounds like a great solution.
> 1. http://search.twitter.com/operators
>
> Thanks,
> Doug
> --
>
> Doug Williams
> Twitter Platform Support
> http://twitter.com/dougw
>
>
>
>
>
> On Fri, May 22, 2009 at 8:05 AM, Jeffrey Greenberg <
> jeffreygreenb...@gmail.com> wrote:
>
>> What is the resolution of the 'since' operator?  It appears to be by the
>> day, but I'd sure like it to be by the minute or second.
>> Can't seem to find this in the docs.
>>
>> The use case is that I want to minimize pulling search results that I've
>> already got.   My solution is to record the time of the last search and the
>> last status_id, and ask for subsequent searches from the status_id. If that
>> fails because it's out of range, I'll ask by the last search date.  Is this
>> the way to go?
>>
>>
>> http://www.tweettronics.com
>> http://www.jeffrey-greenberg.com
>>
>>
>


[twitter-dev] Search: Resolution of Since, and how to avoid pulling redundant search results

2009-05-22 Thread Jeffrey Greenberg
What is the resolution of the 'since' operator?  It appears to be by the
day, but I'd sure like it to be by the minute or second.
Can't seem to find this in the docs.

The use case is that I want to minimize pulling search results that I've
already got.   My solution is to record the time of the last search and the
last status_id, and to ask for subsequent searches from that status_id. If that
fails because it's out of range, I'll ask by the last search date.  Is this
the way to go?
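
Concretely, the scheme I have in mind looks like this (a sketch: the page size
and, in particular, the way an out-of-range since_id is detected are my own
assumptions; what exactly the API returns in that case is part of what I'm
asking):

<?php
// Sketch: poll incrementally with since_id, and fall back to the day-level
// since: operator (plus client-side de-duplication against ids we already
// stored) when the since_id request fails.
function search_page($url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $body = curl_exec($ch);
    curl_close($ch);
    return ($body === false) ? null : json_decode($body, true);
}

function poll_search($query, $last_id, $last_date) {
    $url = 'http://search.twitter.com/search.json?rpp=100'
         . '&q=' . urlencode($query)
         . '&since_id=' . $last_id;
    $result = search_page($url);
    if ($result === null || isset($result['error'])) {
        // assume since_id fell out of range; retry at day resolution and
        // de-duplicate on our side against ids we've already stored
        $url = 'http://search.twitter.com/search.json?rpp=100'
             . '&q=' . urlencode($query . ' since:' . $last_date);
        $result = search_page($url);
    }
    return $result;
}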


http://www.tweettronics.com
http://www.jeffrey-greenberg.com


[twitter-dev] What is current search time range limit?

2009-04-27 Thread Jeffrey Greenberg

Is there an absolute time limit past which searches will fail? What is
that limit, exactly?

I thought I had read 4 months, but I don't see this info in the new
documentation. Furthermore, with the API I am seeing something like 25
days... example: search "dominoes pizza".  I searched this on 4/17
and had responses from March, but now, one week later (today is
4/27/2009), I see no responses before April 2, and yet I know there
were matching tweets in March...

Appreciate your clarification of the documentation...


[twitter-dev] Combining search operators: possible? Limitations!

2009-04-21 Thread Jeffrey Greenberg

I'm not clear on what is possible with search. I've played with the
advanced search page, and with the api directly, but I could still use
some explanation...

There are the usual "and", "or", and "not" operators, but it does not
appear that these can be combined in anything but the simplest ways.
Is that right? There seems to be no 'grouping' operator for combining
these expressions.   Instead, the API seems to list the operators
together so that they can all be present at once... If so, how are the
operators interpreted when several are supplied together?

How are "quoted strings" handled with these? I'm not clear

Then there is "phrase" which implies a word ordering, but how can a
phrase be combined with the above operators?  And does a "quoted
string" differ from a phrase?  And what if a quoted string is suppled
in the "word" field, and something else in the "phrase" field

The operators such as "tude" and "tags" are also confusing, since one
can supply these character sequences in the other fields.  How are
these combined with the above operators?

The date operators seem clear enough.

Then there is the total count of matches, which is mostly missing,
except that it appears unreliably at the end of a series of pages.
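
For concreteness, this is the sort of combined query I'm attempting; whether
the operators actually compose this way is exactly what I'm asking, so treat
it as a sketch rather than a documented example:

<?php
// Sketch: a search.json query mixing a quoted phrase, OR, exclusion and a
// date operator. Everything rides inside the q parameter, urlencoded.
$q   = '"stock market" OR nasdaq -spam since:2009-04-20';
$url = 'http://search.twitter.com/search.json?rpp=100&q=' . urlencode($q);
echo $url . "\n";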

I'd appreciate your clarifying wisdom,
signed 'Confused Seeker in SF'


[twitter-dev] Re: API Changes for April 1, 2009

2009-04-02 Thread Jeffrey Greenberg

Doug,
Grumble: just to say it, this wasn't handled well at all.  Whether this
field disappears due to caching or due to a coding error, the result is
the same: it completely breaks my app.

How long will it take for this issue to clear up? Days? How many,
exactly?  And after X days, will further requests be populated
correctly?

thx,
jeffrey
http://www.tweettronics.com


[twitter-dev] Re: Can we make this a private list?

2009-03-30 Thread Jeffrey Greenberg
>
> Let's please keep this list focused on developers working with/on the
> Twitter API... other uses, like the promotion of applications or looking for
> help with alpha testing of applications, are not appropriate (though we can
> sympathize with the problem).
>

RE: Doug's question about making "basic information more accessible"... it's
pretty accessible, and simple, and I think nicely summarized on one page
(albeit a _large_ page)... and sometimes RTFM is the right response...
 Be careful of bloating the help so that it becomes unreadably large...

cheers!
jeffrey
http://www.jeffrey-greenberg.com


[twitter-dev] Re: followers/ empty arrays!

2009-03-26 Thread Jeffrey Greenberg
This has been reported before too. Between this and bug  #362, there is no
way to reliably get accurate information for users with more than 200k
followers... While there aren't many of them (at this point), they are
influential and visible, so this situation is really hurting me as well. (
http://www.tweettronics.com)

Bug #362 is classified as 'low' priority, but for a certain sort of app
(like twinfluence) this can be profoundly disabling...

cheers!
jeffrey


[twitter-dev] Re: Friends and followers listing ends abruptly for large numbers

2009-03-18 Thread Jeffrey Greenberg
Interesting... I've reported this also: I'm seeing consistent 502 errors on
users with large follower lists when using the social graph API.
The odd thing is that it's inconsistent: I am able to see pages 648 and 649,
but not 1000...

On Wed, Mar 18, 2009 at 11:29 AM, Andrew Badera  wrote:

> Google Is Your Friend -- this issue has come up more than once recently.
> Check the list archives.
>
>
>
>
> On Wed, Mar 18, 2009 at 11:51 AM, Patrick  wrote:
>
>>
>> I'm using the api to retrieve friends and followers for a popular user
>> but it seems the api and the twitter friends and followers webpage end
>> the listing quite early in the listing. (I'm assuming the webpages
>> just use the api behind the scenes)
>>
>> For example check out stephenfry's profile...
>>
>> He's got 316888 followers which should result in over 15000 pages of
>> followers (20 per page). However if you go to, say, page 1000 there
>> are no results: http://twitter.com/stephenfry/followers?page=1000
>>
>> (the listing actually ends on page 647:
>> http://twitter.com/stephenfry/followers?page=647)
>>
>> I'm seeing the exact same issue with my client. The api also stops the
>> listing early.
>>
>> Anyone able to shed any light?
>>
>
>


[twitter-dev] Re: Consistent 502 errors for users with large friend & follower lists

2009-03-04 Thread Jeffrey Greenberg
Protocol Buffers is yet another RPC scheme that requires compilation of the
data types. If, on the other hand, you define simple data types, this can be
much simpler and can finesse RPC issues such as endianness.   I'm wondering if
there is any sort of compression of the XML that stays entirely in utf8
encoding (rather than going binary)?
Also, I don't want a solution that involves paging the data, since the
data I'm after is the entire list.  Breaking up the data does nothing for me
except take longer and increase bandwidth needs. I want the entire chunk
with the least cost to Twitter and myself in time and cpu/bandwidth... so
smaller data is better.  Paging is also bad because it will cost me a
rate-limiting penalty... unless the page sizes are really large... 8-)
Speaking of which, BarackObama is the worst by far with 600k+ lists... but
is the trend only going to get worse?  When/if Twitter really gets popular,
will we be seeing users with 1-million-user lists?  At some point, even with a
binary RPC scheme and data compression, this gets expensive...  (methinks
caching on my side is unavoidable)
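
One thing I plan to try in the meantime is asking for compression on the wire,
which keeps the payload plain utf8 XML at both ends and only compresses it in
transit. Whether these endpoints honour Accept-Encoding is an assumption on my
part, so treat this as a sketch:

<?php
// Sketch: let curl send Accept-Encoding: gzip and transparently inflate the
// response, so a multi-megabyte follower list shrinks in transit without any
// change of format.
$ch = curl_init('http://twitter.com/followers/ids.xml?screen_name=barackobama');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_ENCODING, 'gzip');   // adds the header and decodes
$xml = curl_exec($ch);
curl_close($ch);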

On Tue, Mar 3, 2009 at 1:13 PM, Alex Payne  wrote:

>
> That would definitely require us to weigh our current knowledge of
> Thrift vs Protocol Buffers. I'll think about it.
>
> On Tue, Mar 3, 2009 at 12:42, Dossy Shiobara  wrote:
> >
> > On 3/3/09 3:07 PM, Alex Payne wrote:
> >>
> >> We're fully aware of what Protocol Buffers are their intended use. We
> >> use Thrift, Facebook's clone of Protocol Buffers. You might note the
> >> use of the world "internal" in the material you quoted.
> >
> > Quoting from http://code.google.com/apis/protocolbuffers/docs/faq.html:
> >
> >"We would like to provide public APIs that accept protocol buffers as
> > well as XML, both because it is more efficient and because we're just
> going
> > to convert that XML to protocol buffers on our end anyway."
> >
> > Their use of the word "internal" simply clarifies where they _currently_
> use
> > it, not its limitation.
> >
> > Could Twitter be the first service to offer protocol buffers?  Sure.  I
> > guess you're saying it's not going to happen, though.
> >
> > --
> > Dossy Shiobara  | do...@panoptic.com | http://dossy.org/
> > Panoptic Computer Network   | http://panoptic.com/
> >  "He realized the fastest way to change is to laugh at your own
> >folly -- then you can let go and quickly move on." (p. 70)
> >
>
>
>
> --
> Alex Payne - API Lead, Twitter, Inc.
> http://twitter.com/al3x
>


[twitter-dev] Re: Consistent 502 errors for users with large friend & follower lists

2009-03-03 Thread Jeffrey Greenberg

True, JSON is probably more compact.   But NO to Google's Protocol
Buffers: it's yet another RPC interface requiring compilation.

But really I want to focus on the 502 errors!


[twitter-dev] Consistent 502 errors for users with large friend & follower lists

2009-03-02 Thread Jeffrey Greenberg

My app (tweettronics.com) fetches friends and followers for a given
user with the following pair of calls, done one immediately after the
other:
http://twitter.com/friends/ids.xml
http://twitter.com/followers/ids.xml

They take quite a while on large users (understandably, up to 2-3 MB
of data in XML encoding), but worse, they often fail with a 502 error.

It's easy to reproduce on user barackobama, and it happens less often as
you go down the top-10 lists... e.g. on ev... somewhere around 200k
followers it becomes less frequent.

Can this be addressed on your side?

BTW: I want this data pretty fresh and I'd like to avoid duplicating
the Twitter DB, so I'm trying to avoid caching these calls... still, I
can imagine caching as viable just to improve the performance of
transferring the large list.   Nonetheless, since you are facing massive
data growth, I'm wondering if there are any interesting alternatives: a
scheme that transfers something other than XML, one that packs more data
per byte?

Thanks
Jeffrey

http://www.jeffrey-greenberg.com