[twitter-dev] 413. HTTP/1.1 413 FULL head

2009-11-12 Thread Julien

Hello,

We (http://superfeedr.com) have a lot of users who gave us Twitter
feeds (both users + track) to monitor on their behalf. A few months
ago we started having more and more feeds, so we asked to be
whitelisted.
We now have about 15k feeds, and even the whitelisting is not enough:
we see too many errors from Twitter.

We decided to implement the streaming API so that everybody is happy:
our users get faster updates, and Twitter gets less polling.

Unfortunately, we're seeing errors from Twitter: invalid status
code: 413. HTTP/1.1 413 FULL head

I think that we're tracking too many feeds through that API, because
it works when we have fewer track keywords/userids. Is there any way
around this? (We have access to "shadow".)

Thanks,


[twitter-dev] 413 FULL head error on Streaming API

2009-11-12 Thread Julien

Hi,

I work for Superfeedr. We do feed parsing on-demand. We don't care
what feeds people are giving us to fetch. As a matter of fact, we
don't even care about the content. Yet, some users have given us RSS/
Atom feeds from Twitter to fetch.

We bumped into the polling limits a few months ago: we asked to be
whitelisted and "gained" a few months. We now have about 15k Twitter
feeds, and polling is not suitable anymore. We implemented your
streaming API to make everybody happy.
Except that it doesn't work: we get "413 FULL head" errors. I asked
Ryan Sarver for help; he granted us a "shadow" role, which is really
not what we need.

What can we do?

Thanks



[twitter-dev] Re: 413 FULL head error on Streaming API

2009-11-12 Thread Julien

Thanks John, I'll review all that and we'll post more info soon.

Thanks for listening!

On Nov 12, 6:39 am, John Kalucki  wrote:
> 413 usually means "too long" on the Streaming API. Too many
> predicates, or perhaps a URL of crazy length. This is documented in
> the wiki.
>
> First, be sure that you are using a POST parameter and not encoding
> your predicates in the URL. Second, look at the text message that is
> returned with the 413. It will tell you what you are doing wrong. Most
> likely, you are attempting to follow more users than the Shadow role
> allows.
>
> Detail your use case, and perhaps then we can give some advice on the
> availability of the data and, if available, the most practical way to
> obtain the data. We do not make every arbitrary materialization of the
> data available.
>
> -John Kalucki
> http://twitter.com/jkalucki
> Services, Twitter Inc.
>
> On Nov 11, 11:49 pm, Julien  wrote:
>
>
>
> > Hi,
>
> > I work for Superfeedr. We do feed parsing on-demand. We don't care
> > what feeds people are giving us to fetch. As a matter of fact, we
> > don't even care about the content. Yet, some users have given us RSS/
> > Atom feeds from Twitter to fetch.
>
> > We bumped into the polling limits a few months ago : we asked to be
> > white-listed and "gained" a few month. We now have about 15k Twitter
> > feeds, and polling is not suitable anymore. We implemented your
> > streaming API to make everybody happy.
> > Except that it deosn't work. : we get  "413 FULL head" errors. I asked
> > Ryan Sarver for help, he granted us a "shadow" role, which is really
> > not what we need.
>
> > What can we do?
>
> > Thanks


[twitter-dev] Re: 413 FULL head error on Streaming API

2009-11-12 Thread Julien

For some reason, my previous post didn't show up :/
So, we don't get the error when we stay within the limits of the
default access level (200 track keywords and 400 follow userids).

If we go beyond that, we get: HTTP/1.1 413 Request Entity Too Large

Could it be that our username hasn't been approved despite what the
interface says? Our username is "superfeedr_trac".

Thanks,

Julien
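
As an aside, a minimal Perl sketch of the arithmetic involved here: the
predicates go in the form-encoded POST body (not in the URL), and the
default access level mentioned in this thread caps them at 200 track
keywords and 400 follow userids. The keywords, userids and printout
below are placeholders.

use strict;
use warnings;

# Default access level, as discussed in this thread:
# 200 track keywords and 400 follow userids.
my %default_limit = (track => 200, follow => 400);

my @track  = ('superfeedr', 'pubsubhubbub');   # placeholder keywords
my @follow = (80706782, 80706249, 59397117);   # placeholder userids

for ([track => \@track], [follow => \@follow]) {
    my ($name, $list) = @$_;
    printf "%s: %d predicate(s), limit %d%s\n",
        $name, scalar(@$list), $default_limit{$name},
        @$list > $default_limit{$name} ? ' -- expect a 413' : '';
}

# The predicates belong in the form-encoded POST body, never in the URL:
my $body = 'track=' . join(',', @track) . '&follow=' . join(',', @follow);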



On Nov 12, 8:16 am, Julien  wrote:
> Thanks John, I'll review all that and we'll post more info soon.
>
> Thanks for listening!
>
> On Nov 12, 6:39 am, John Kalucki  wrote:
>
>
>
> > 413 usually means "too long" on the Streaming API. Too many
> > predicates, or perhaps a URL of crazy length. This is documented in
> > the wiki.
>
> > First, be sure that you are using a POST parameter and not encoding
> > your predicates in the URL. Second, look at the text message that is
> > returned with the 413. It will tell you what you are doing wrong. Most
> > likely, you are attempting to follow more users than the Shadow role
> > allows.
>
> > Detail your use case, and perhaps then we can give some advice on the
> > availability of the data and, if available, the most practical way to
> > obtain the data. We do not make every arbitrary materialization of the
> > data available.
>
> > -John Kalucki
> > http://twitter.com/jkalucki
> > Services, Twitter Inc.
>
> > On Nov 11, 11:49 pm, Julien  wrote:
>
> > > Hi,
>
> > > I work for Superfeedr. We do feed parsing on-demand. We don't care
> > > what feeds people are giving us to fetch. As a matter of fact, we
> > > don't even care about the content. Yet, some users have given us RSS/
> > > Atom feeds from Twitter to fetch.
>
> > > We bumped into the polling limits a few months ago : we asked to be
> > > white-listed and "gained" a few month. We now have about 15k Twitter
> > > feeds, and polling is not suitable anymore. We implemented your
> > > streaming API to make everybody happy.
> > > Except that it deosn't work. : we get  "413 FULL head" errors. I asked
> > > Ryan Sarver for help, he granted us a "shadow" role, which is really
> > > not what we need.
>
> > > What can we do?
>
> > > Thanks


[twitter-dev] Re: 413 FULL head error on Streaming API

2009-11-12 Thread Julien
ling,actocom,testcricket,sufjan,download,mypageo,eleicoes2009,invest,JRDNLive,bermuda,catchoffer,nicovideo,roadhouse,coral,customer,masons,hasidic,mason,tofubeats,lifepass,frugal,geneity,durga,wisdom,openbet,market,vosiznies,giants,ballet,choicestream,bills,monza,Emmys,jewish,swine,rehava,dinamo,Feedback,glutenfreefaces,junkma,atigeo,droidcon,wisdon,inmyhood,betfair,cadsoft,discover,tweetDC,agent,nata2,esperanto,chinese,pixiv,historychannel,funny,nonoriri,hafizulazeez,erlang,gonbuto,yugui,oquno,rakusai,tsunami,gaogaogao,ZxAxTxSxUxOxNxN,barcampchs,bengals,ksasao,hitode909,rhythmsift,watch_akiba,trapezoid,DJWILDPARTY,kusigahama,wild_man,Shako_Pani,kuzuha,belleville,kalamazoo,MobileHackerz,hironica,yteppei,hkhumanoid,koseijin,postlets,a_kodama,enkiv2,yanbe,hajimehoshi,daikichi_blue,Philippines,sfc_orf,mat2uken,wilfue,deutsch,buchmesse,sakushin,rackspace,yusukezzz,boonies,tacke,memebomb,sj_tga,nojiri_h,H_D_D,quinte,kevinleversee,FollowFriday,lesondes,teraiman,removeall,liepzig,smartfade,cloud,aviary,kensuzuki,takimo,yummly,nepholologist,doodoo_jap,optimost,monae,nanhai,hoverhell,topdesblogs,naka3,REMOVE,paras,shokai,takaoka,genshi,shimobayashi,dai_okabe,takano32,bedbak,nintendo,automobile,Thomas,paulorcf,photos,hydrogeology,Krishna,music,tiichan,cricket,xlamp,statistics,rasmiranjan,Fitmitjulia,steelman,avila,foobar,browns,nuvaring,clt20,Monson,karthiktrue,blair,windows,ura_g7,arpan,
11eyes,Obama,nooblast,President,cck09,zilverenkruis,korea,zilverenkruisachmea,rajiv,s12bt,achmea,chicagokarl,science,bvalani,frozenworm,takkkun,fredbvalani,server,ignite,photoshop,quecy,smitu,jaisankar,serkantoto,sabu_mikawaya,kotobuki,goahead256,pubsubhubbud,koizuka,madrid2016,genschi,natant,MindTouch,zorgverzekering,risenberg,killerstartups,terebe,corkd,brown2020,bosschaang,rajivloveyou21,scrapbook,Ilayaraja,isuca,zilveren,tmura,verobus,rer2009,kanekure,rubyenrails,philosophy,macarthur,statitics,wanna,mangalore,shashi,global,singing,instrumentation,royal,hametsu,beatcancer,Mozilla,mallu,akshay,stephankaag,rubyonrails,sprint,telugu,dogeza,webbie,cookeville,cybershot,singer,psychiatry,kalmadi,keyword,WiMAX,economics,Buddhism,glocom,entomology,memeo,randomationword,kaleidico,sports,rolyal,Follow,climate,holderhq,koalition,cigarcook,health,paranormal,torrent,kanji,mogra,mexico,chicago,opera,illlinois,nevada,boulder,ryankanno,harper2,superfeedr,rails,russia,pubsub,monkeys,david,pkitano,atkinsm,california,hoffman,indiana,happiness,lentil,julien51,harper,thomasknoll,colorado,detour1999,spain,ejabberd,pongpaet,obama,twtmycard,parislemon,foursquare,gadfly,hardstyle,mindstorms,blackhat,guwahati,toronto,testing,kanye,elvis,lawyersclub,nixonmcinnes,Hamilton,musharraf,raptors,hansakochcom,yoonew,larok,bigbroafrica,hacking,napier,surajkala,bears,collectdesign,machine,Israeli,beyonce,wistia,ixigo,tajim,bicycle,federer,mbaclub,Arenys,budgeting,GoogleCodeJam,syndeomedia,teamtaylor,hagos,pandu,traeblain,chrisbrogan,edinburgh,shophtml,dingman,egypt,caclubindia,hansakoch,caclub,amandachapel,syndeolabs,brady,croatia,Soicla,profermics,miage,buffalo,hanskoch,arduino,additude,asshole,musichackday,rakhisawant,imparatta,suraj,Bulgaria,slept,geekandpoke,gwhatchet,cerevo,marcec,gitorious,spore,stafon,todesking,kettlemonkey,wildparty,Halloween,bookmyshow,HadekanX2,mygenius,bdutt,invitation,indian,Yossi,assam,gauhati,holga,tvduell,googlewave,artificial,jboss,umezawakazuki,thong,Searched,TravelCenters,public,leiberman,healthcare,StantonStreet,option,senate,BRAVIA,elrowan,boeing,sternbr,tickets,scotsbob,epona,anahi,tunisie,ubuntu9,ubbin,magic,rer20009,chumby,afghanistan,makayama,traffic,trafficsupport,jonikahara,byouinsan,paeae,cleartrip,ironman,cchsm,commontown,video,PenultimosDias,hvelarde,BrianWancho,flightstats,flight,quelle,YoaniSanchez,bandcontest,Xperia,sha_feng,discord,phweet,stock,Finland,Kotka,yahoo,dewet,centrelink,siegburg,tqubed,moosecox,podcast,googlevoice,gooberdlx,kindohm

And here is the response that we get:
413. HTTP/1.1 413 Request Entity Too Large


I am using our account "superfeedr_trac" to make that query, and it is
authorized. (When I try to authorize again, I get the following
message: "You're already set to access the Streaming API.")

Thanks for your help.
Julien



On Nov 12, 8:16 am, Julien  wrote:
> Thanks John, I'll review all that and we'll post more info soon.
>
> Thanks for listening!
>
> On Nov 12, 6:39 am, John Kalucki  wrote:
>
>
>
> > 413 usually means "too long" on the Streaming API. Too many
> > predicates, or perhaps a URL of crazy length. This is documented in
> > the wiki.
>
> > First, be sure that you are using a POST parameter and not encoding
> > your predicates in the URL. Second, look at the text message that is
> > returned with the 413. It will tell you what you are doing wrong. Most
> > likely, you are attempting to follow more users than the Shadow role
> > allows.
>
> > Detail you

[twitter-dev] Re: 413 FULL head error on Streaming API

2009-11-12 Thread Julien

Indeed... sorry about that!

On Nov 12, 10:52 am, Cameron Kaiser  wrote:
> > For some reason, my previous post didn't show up :/
>
> That's because you're a new poster and the volunteer mods have to approve
> them first. Give us a chance :)
>
> --
>  personal: http://www.cameronkaiser.com/ --
>   Cameron Kaiser * Floodgap Systems * www.floodgap.com * ckai...@floodgap.com
> -- It's lonely at the top, but the food is better. 
> 


[twitter-dev] Re: 413 FULL head error on Streaming API

2009-11-14 Thread Julien

John, Cameron... Any clue?



On Nov 12, 11:05 am, Julien  wrote:
> Indeed... sorry about that!
>
> On Nov 12, 10:52 am, Cameron Kaiser  wrote:
>
>
>
> > > For some reason, my previous post didn't show up :/
>
> > That's because you're a new poster and the volunteer mods have to approve
> > them first. Give us a chance :)
>
> > --
> >  
> > personal: http://www.cameronkaiser.com/ --
> >   Cameron Kaiser * Floodgap Systems * www.floodgap.com * ckai...@floodgap.com
> > -- It's lonely at the top, but the food is better. 
> > 


[twitter-dev] Re: 413 FULL head error on Streaming API

2009-11-15 Thread Julien

Thanks John,

I am not sure I understand when you say "There is no single role in
the Streaming API that will allow that many follows and track
parameters." Your documentation says:
"The default access level allows up to 200 track keywords and 400
follow userids. Increased access levels allow 80,000 follow userids
("shadow" role), 400,000 follow userids ("birddog" role), 10,000 track
keywords ("restricted track" role), and 200,000 track keywords
("partner track" role). Increased track access levels also pass a
higher proportion of statuses before limiting the stream."

Ryan Sarver said he granted us the "shadow" role (80,000 userids),
yet we can't follow more than 400 userids...

What are our remaining options? Polling doesn't work (you block our
requests), and now you say (though your doc says something different)
that we can't use your streaming API for this.

Julien



On Nov 15, 7:01 am, John Kalucki  wrote:
> I believe I answered this question on Nov 12 (message 2) in this
> thread.
>
> http://groups.google.com/group/twitter-development-talk/tree/browse_f...
>
> There is no single role in the Streaming API that will allow that many
> follows and track parameters.
>
> -John
>
> On Nov 12, 9:21 am, Julien  wrote:
>
>
>
> > John,
>
> > This is exactly what we post to your servers (I just hid the
> > Authorization) :
>
> > POST /1/statuses/filter.json HTTP/1.1
> > Host: stream.twitter.com
> > User-agent: TwitterStream
> > Authorization: Basic X==
> > Content-type: application/x-www-form-urlencoded
> > Content-length: 13609
>
> > follow=80706782,80706249,59397117,813276,1750351,1497,14303772,15640990,106 
> > 30,1581511,794545,12,16718628,15351161,55033131,22207903,956501,16028823,98 
> > 2721,14100016,4411621,14615776,648563,26383279,21161414,10239,811350,984381 
> > 2,16540939,20,25806143,15889416,10128922,13955972,31953,7337062,32653,10423 
> > 61,26020994,1714051,7544012,309073,32063,14098218,33205299,1392281,664153,1 
> > 5328268,6717392,611823,15913,21169368,6892002,731253,7281962,14956807,54695 
> > 12,13885092,36502008,15503041,5381582,26566923,2876271,14083455,18029701,10 
> > 91741,8256162,34174320,16418101,17544169,22784949,5529162,813286,7136992,47 
> > 685641,20935486,15864599,14712874,469163,14134376,13370272,1501,5973812,151 
> > 33601,24338507,14454247,1748581,12858,13331942,1799511,10425502,24103,42833 
> > 3,10365,15279944,14229661,27663774,14692721,10226672,14363100,25694882,9536 
> > 542,14418757,14114731,11900,15352135,18399826,22449663,7889672,25963448,556 
> > 3012,66713497,1786041,19643660,17919393,14090674,14956888,25808898,59194896 
> > ,2048141,15111776,16426292,45733,20201051,65973559,17781981,52680804,644093 
> > 99,1367531,31132926,16450330,23853857,14075928,43856654,28156155,40197535,1 
> > 5773675,41112544,28174228,20526944,63407440,22802398,67899674,6017542,18120 
> > 198,52589720,24855923,17829758,33233205,15945424,15954704,45075974,4404 
> > ,17409240,22797985,14323759,26072066,614133,11881852,21655440,37879306,1652 
> > 541,6368672,29925623,31353077,6821102,15876379,14634720,37405382,15838599,1 
> > 4790192,711303,23753940,59726910,17819913,18866407,17113231,17124895,194730 
> > 1,5741722,27456531,1717,42033487,11856032,53372599,11857072,31114028,15 
> > 996830,41075473,65904217,759251,23981347,14516048,27500565,16890969,1697024 
> > 8,15880642,16129920,42402819,45369197,31329385,5402612,13838562,14803917,84 
> > 68872,14372478,65632985,14292132,80427603,38400130,22910295,77877070,488403 
> > 81,3216921,612473,80713872,42056836,20153343,1705201,20924090,81391288,1943 
> > 8765,104203,14089573,4700381,7400702,33684457,37874853,14496232,7235522,741 
> > 1552,71508034,15072502,33584794,15237935,81025521,22297824,7692692,18824787 
> > ,18949452,7161542,22301030,27565488,14439930,1975321,1830491,46376581,19945 
> > 693,23235489,14239496,3850041,22021097,43417156,12942,14162945,18159833,138 
> > 27442,21645304,15708435,794928,15572963,11894422,456413,5579402,5990942,203 
> > 99599,16297742,8085962,38961973,20631151,41375788,16416613,60334210,1670572 
> > 6,14305706,3128561,6893672,2860101,19039063,63810996,22651121,42860263,9379 
> > 61,50270097,36951816,1186,14613750,18799530,12974922,22543092,18991180,1525 
> > 1361,7408842,56458858,23530011,5567,48403197,586,15204168,17028306,2318591, 
> > 20389644,7314182,14704003,14438790,42096373,45891150,15237382,60098511,2709 
> > 6075,19059514,14438014,4234581,15097328,50751569,246,54911372,770289,440113 
> > 51,20194429,20365897,22540123,24203163,25489250,15533871,14166131,35136170, 
&

[twitter-dev] Re: 413 FULL head error on Streaming API

2009-11-15 Thread Julien

Thanks a lot John for this clear explanation.
Is it possible to have the following settings:
- superfeedr_foll as Shadow
- superfeedr_trac as TrackRestricted

That would truly be awesome. I will change our implementation so that
we use 2 different accounts for the 2 purposes.
Again, thx.

Julien
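
As a rough illustration of the two-connection split John suggests
below (one connection per account, each carrying only the predicate
type its role allows), here is a Perl sketch. The account names are
the ones proposed above; the password, userids and keywords are
placeholders.

use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Request;

# One connection per role: the "follow" account carries only userids,
# the "track" account carries only keywords.
my %body_for = (
    superfeedr_foll => 'follow=' . join(',', 80706782, 80706249, 59397117),
    superfeedr_trac => 'track='  . join(',', 'superfeedr', 'pubsubhubbub'),
);

for my $account (keys %body_for) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    next if $pid;   # parent moves on; each child handles one stream

    my $req = HTTP::Request->new(
        POST => 'http://stream.twitter.com/1/statuses/filter.json');
    $req->authorization_basic($account, 'PASSWORD');
    $req->content_type('application/x-www-form-urlencoded');
    $req->content($body_for{$account});

    # The callback is invoked for each chunk of the long-lived response.
    LWP::UserAgent->new->request($req, sub {
        my ($chunk) = @_;
        print "[$account] $chunk";
    });
    exit 0;
}
wait() for keys %body_for;   # reap both children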


On Nov 15, 11:29 am, John Kalucki  wrote:
> The default access levels are track 200 and follow 400.
>
> Shadow increases follow to 80k, but leaves track at the default of
> 200.
> TrackRestricted increases track keywords to 10k but leaves follow at
> the default of 400.
>
> All of the other mentioned roles do the same, they increase access on
> one dimension, but leave the other dimension at the default. There is
> no role currently available that increases both the track and the
> follow limit. One could successfully argue that there should be such a
> role. In the mean time, we suggest that the rare service that requires
> both increased track and increased follow access obtain elevated roles
> on two different accounts and connect twice. While this initially
> appears annoying, it allows functionally partitioned predicate refresh
> periodicity.
>
> -John Kalucki
> http://twitter.com/jkalucki
> Services, Twitter Inc.
>
> On Nov 15, 9:47 am, Julien  wrote:
>
>
>
> > Thanks John,
>
> > I am not sure I understand when you say "There is no single role in
> > the Streaming API that will allow that many  follows and track
> > parameters.", your documentation says :
> > "The default access level allows up to 200 track keywords and 400
> > follow userids. Increased access levels allow 80,000 follow userids
> > ("shadow" role), 400,000 follow userids ("birddog" role), 10,000 track
> > keywords ("restricted track" role), and 200,000 track keywords
> > ("partner track" role). Increased track access levels also pass a
> > higher proportion of statuses before limiting the stream."
>
> > Ryan Sarver said he granted us the "shadow" role yet (8 usersid),
> > we can track more than 400 userid...
>
> > What are our remaining options? Polling doesn't work (you block our
> > requests, and you say, (-but your doc says something different) that
> > we can use your streaming API.
>
> > Julien
>
> > On Nov 15, 7:01 am, John Kalucki  wrote:
>
> > > I believe I answered this question on Nov 12 (message 2) in this
> > > thread.
>
> > >http://groups.google.com/group/twitter-development-talk/tree/browse_f...
>
> > > There is no single role in the Streaming API that will allow that many
> > > follows and track parameters.
>
> > > -John
>
> > > On Nov 12, 9:21 am, Julien  wrote:
>
> > > > John,
>
> > > > This is exactly what we post to your servers (I just hid the
> > > > Authorization) :
>
> > > > POST /1/statuses/filter.json HTTP/1.1
> > > > Host: stream.twitter.com
> > > > User-agent: TwitterStream
> > > > Authorization: Basic X==
> > > > Content-type: application/x-www-form-urlencoded
> > > > Content-length: 13609
>
> > > > follow=80706782,80706249,59397117,813276,1750351,1497,14303772,15640990,106
> > > >  
> > > > 30,1581511,794545,12,16718628,15351161,55033131,22207903,956501,16028823,98
> > > >  
> > > > 2721,14100016,4411621,14615776,648563,26383279,21161414,10239,811350,984381
> > > >  
> > > > 2,16540939,20,25806143,15889416,10128922,13955972,31953,7337062,32653,10423
> > > >  
> > > > 61,26020994,1714051,7544012,309073,32063,14098218,33205299,1392281,664153,1
> > > >  
> > > > 5328268,6717392,611823,15913,21169368,6892002,731253,7281962,14956807,54695
> > > >  
> > > > 12,13885092,36502008,15503041,5381582,26566923,2876271,14083455,18029701,10
> > > >  
> > > > 91741,8256162,34174320,16418101,17544169,22784949,5529162,813286,7136992,47
> > > >  
> > > > 685641,20935486,15864599,14712874,469163,14134376,13370272,1501,5973812,151
> > > >  
> > > > 33601,24338507,14454247,1748581,12858,13331942,1799511,10425502,24103,42833
> > > >  
> > > > 3,10365,15279944,14229661,27663774,14692721,10226672,14363100,25694882,9536
> > > >  
> > > > 542,14418757,14114731,11900,15352135,18399826,22449663,7889672,25963448,556
> > > >  
> > > > 3012,66713497,1786041,19643660,17919393,14090674,14956888,2580889

[twitter-dev] Re: 413 FULL head error on Streaming API

2009-11-15 Thread Julien

Ok I sent an email, but Ryan takes forever to respond to my emails :/
Isn't there anything you could do?

thanks,

On Nov 15, 6:34 pm, John Kalucki  wrote:
> Please request the additional access by whatever means you requested
> your initial access.
>
> -John
>
> On Nov 15, 3:35 pm, Julien  wrote:
>
>
>
> > Thanks a lot John for this clear explaination.
> > Is it possible to have the following settings:
> > - superfeedr_foll as Shadow
> > - superfeedr_trac as TrackRestricted
>
> > That would truly be awesome. I will change our implementation so that
> > we use 2 different accounts for the 2 purposes.
> > Again, thx.
>
> > Julien
>
> > On Nov 15, 11:29 am, John Kalucki  wrote:
>
> > > The default access levels are track 200 and follow 400.
>
> > > Shadow increases follow to 80k, but leaves track at the default of
> > > 200.
> > > TrackRestricted increases track keywords to 10k but leaves follow at
> > > the default of 400.
>
> > > All of the other mentioned roles do the same, they increase access on
> > > one dimension, but leave the other dimension at the default. There is
> > > no role currently available that increases both the track and the
> > > follow limit. One could successfully argue that there should be such a
> > > role. In the mean time, we suggest that the rare service that requires
> > > both increased track and increased follow access obtain elevated roles
> > > on two different accounts and connect twice. While this initially
> > > appears annoying, it allows functionally partitioned predicate refresh
> > > periodicity.
>
> > > -John Kalucki
> > > http://twitter.com/jkalucki
> > > Services, Twitter Inc.
>
> > > On Nov 15, 9:47 am, Julien  wrote:
>
> > > > Thanks John,
>
> > > > I am not sure I understand when you say "There is no single role in
> > > > the Streaming API that will allow that many  follows and track
> > > > parameters.", your documentation says :
> > > > "The default access level allows up to 200 track keywords and 400
> > > > follow userids. Increased access levels allow 80,000 follow userids
> > > > ("shadow" role), 400,000 follow userids ("birddog" role), 10,000 track
> > > > keywords ("restricted track" role), and 200,000 track keywords
> > > > ("partner track" role). Increased track access levels also pass a
> > > > higher proportion of statuses before limiting the stream."
>
> > > > Ryan Sarver said he granted us the "shadow" role yet (8 usersid),
> > > > we can track more than 400 userid...
>
> > > > What are our remaining options? Polling doesn't work (you block our
> > > > requests, and you say, (-but your doc says something different) that
> > > > we can use your streaming API.
>
> > > > Julien
>
> > > > On Nov 15, 7:01 am, John Kalucki  wrote:
>
> > > > > I believe I answered this question on Nov 12 (message 2) in this
> > > > > thread.
>
> > > > >http://groups.google.com/group/twitter-development-talk/tree/browse_f...
>
> > > > > There is no single role in the Streaming API that will allow that many
> > > > > follows and track parameters.
>
> > > > > -John
>
> > > > > On Nov 12, 9:21 am, Julien  wrote:
>
> > > > > > John,
>
> > > > > > This is exactly what we post to your servers (I just hid the
> > > > > > Authorization) :
>
> > > > > > POST /1/statuses/filter.json HTTP/1.1
> > > > > > Host: stream.twitter.com
> > > > > > User-agent: TwitterStream
> > > > > > Authorization: Basic X==
> > > > > > Content-type: application/x-www-form-urlencoded
> > > > > > Content-length: 13609
>
> > > > > > follow=80706782,80706249,59397117,813276,1750351,1497,14303772,15640990,106
> > > > > >  
> > > > > > 30,1581511,794545,12,16718628,15351161,55033131,22207903,956501,16028823,98
> > > > > >  
> > > > > > 2721,14100016,4411621,14615776,648563,26383279,21161414,10239,811350,984381
> > > > > >  
> > > > > > 2,16540939,20,25806143,15889416,10128922,13955972,31953,7337062,32653,10423
> > > > > >  
> &g

[twitter-dev] Track streaming : how to match tweets?

2009-12-02 Thread Julien
Hey,

I am pretty sure this is an issue that was raised by several people,
but I'd love to see if we can find a solution.

Right now, with the streaming API, I can track keywords; the problem,
when I deal with 5 different keywords, is to identify which keyword a
tweet matches.

Say I subscribe to the following keywords: julien,superfeedr,google

If I get a tweet, the only way to know what keyword it matches is to
compare all of its words to the words I'm tracking... (maybe there is
something easier).

That's quite "hard", but it becomes harder if I add operators. Say I
have a search "romeo+juliet". When I get a tweet, I need to compare it
to all the keywords, plus all the combinations :/ Technically that is
not even doable if I have more than 10 keywords, since there are a LOT
of possible combinations.

What I'm suggesting is basically that Twitter tell me which
keyword this tweet matches. Twitter has the information, since it
sends me only this specific tweet, right? That would definitely change
the schema a little bit, but it makes things easier for a lot of people,
and the investment is not so big on your side, I think.

Doable?



[twitter-dev] Re: Streaming Api - Keywords matched

2009-12-02 Thread Julien
I kind of disagree with you here... not because it's hard to match the
users (the algo you offered is what we use) but because you assume
that queries will just match 1 single keyword.

I think this is not doable if you start introducing things like + or &
or || or "", because you need to compare a finite list of tokens against
an almost infinite list of combined tokens...

Julien
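
For reference, a minimal Perl sketch of the lookup-based routing John
describes below: a hash from each tracked keyword to its subscribers,
tokenization of the status text, one hash lookup per token, and a
per-user record of already-delivered status ids so duplicates are
dropped. The data and the tokenization rule are purely illustrative.

use strict;
use warnings;

# keyword -> subscribers interested in it (illustrative data)
my %subscribers_for = (
    superfeedr => ['user_a', 'user_b'],
    google     => ['user_b'],
);

# per-user record of statuses already delivered (the duplicate filter)
my %delivered;

sub route_status {
    my ($status_id, $text) = @_;
    my @tokens = split /\W+/, lc $text;    # rough tokenization
    for my $token (@tokens) {
        my $users = $subscribers_for{$token} or next;
        for my $user (@$users) {
            next if $delivered{$user}{$status_id}++;   # already sent
            print "deliver status $status_id to $user (matched '$token')\n";
        }
    }
}

route_status(6253282, 'Superfeedr now pings Google in realtime');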



On Nov 3, 11:41 pm, John Kalucki  wrote:
> May I suggest a potentially much more efficient algorithm? Place all
> keywords in a HashMap that maps keywords to a list of subscribed
> users. Tokenize the status text, and look up each token in the hash
> table to deliver the status to each subscribed user. Within the user,
> apply a generational filter to prevent duplicate deliveries of the
> same status. The statusid as an opaque marker works just fine assuming
> single-threaded operation or an appropriately scoped critical section
> that atomically completes status delivery to all users. You cannot
> assume strictly increasing statusids, so arithmetic comparison other
> than equality is a doomed generational index.
>
> This is how the Streaming API implements track (among other things).
> Your client is performing the same streaming operations to demultiplex
> the stream into your client streams as the Streaming API does to the
> Firehose to create your stream. The cost is nearly fixed, as there are
> only so many tokens per status. You are limited entirely by memory, as
> you can quickly forward statuses to a large number of clients
> following a nearly limitless set of keywords.
>
> -John Kalucki
> http://twitter.com/jkalucki
> Services, Twitter Inc.
>
> On Nov 3, 9:59 am, FabienPenso wrote:
>
>
>
> > I agree, however it would help a lot because instead of doing :
>
> > for keyword in all_keywords
> >  if tweet.match(keyword)
> >   //matched, notify users
> >  end
> > end
>
> > we could do
>
> > for keyword in keywords_matched
> >  // same as above
> > end
>
> > for matching 5,000 keywords, it would bring the first loop from 5,000
> > to probably 1 or 2.
> > You know what you matched, so it's quite easy for you just to include
> > raw data of matched keywords, I don't need anything fancy. Just space
> > separated keywords would help _so much_.
>
> > On Tue, Nov 3, 2009 at 3:15 PM, John Kalucki  wrote:
>
> > > The assumption is that client services will, in any case, have to
> > > parse and route statuses to potentially multiple end-users. Providing
> > > this sort of hint wouldn't eliminate the need to parse the status and
> > > would likely result in duplicate effort. We're aware that we are, in
> > > some use cases, externalizing development effort, but the use cases
> > > for the Streaming API are so many, that it's hard to define exactly
> > > how much this feature would help and therefore how much we're
> > > externalizing.
>
> > > -John Kalucki
> > >http://twitter.com/jkalucki
> > > Services, Twitter Inc.
>
> > > On Nov 3, 1:53 am, FabienPenso wrote:
> > >> Hi.
>
> > >> Would it be possible to include the matched keywords in another field
> > >> within the result from the streaming/keyword API?
>
> > >> It would prevent matching those myself when matching for multiple
> > >> internal users, to spread the tweets to the legitimate users, which
> > >> can be time consuming and tough to do on lots of users/keywords.
>
> > >> Thanks.


[twitter-dev] Re: Track streaming : how to match tweets?

2009-12-03 Thread Julien
Well, then I'd need some help with that...

Again, it's easy with single search keywords, but I haven't found a
solution for combined searches like twitter+stream or photo+Paris...
because I would have to compare each combination of tokens in the
tweet...

Can someone give more details?

I am not sure why I'd still need to match the keywords on my side
either... if you can tell me which ones it matches.

Thanks,



On Dec 3, 9:05 am, Dave Sherohman  wrote:
> On Wed, Dec 02, 2009 at 03:15:21PM -0800, Julien wrote:
> > If I get a tweet, the only way to know what keyword it matches is to
> > compare all of its words to the words I'm tracking... (mayvbe there is
> > something easier).
>
> > That's quite "hard" but it becomes harder if I add operands. Say I
> > have a search "romeo+juliet". When I get a tweet, I need to compare it
> > to all the keywords, plus all the combinations :/ Technically that is
> > not even doable if i have more than 10 keywords, since there are a LOT
> > of combinations possible.
>
> You are mistaken.  Provided you have appropriate support from your
> language or its libraries, accomplishing this is trivial.  Using Perl
> and Regexp::Assemble, FishTwits is currently tracking 1,358 words/
> phrases and, for each tweet, building a list of which words/phrases
> appear in that tweet.  It's very doable (quick, even), despite having
> far more than 10 keywords involved.
>
> --
> Dave Sherohman


[twitter-dev] Re: Track streaming : how to match tweets?

2009-12-05 Thread Julien
Thanks Dave,

I think I get it from your example... yet, in our case, we have
several thousand keywords, and many, many complex searches (with
filter:, "and", "or", near: ... and so on).

I keep thinking that instead of re-implementing on my side the search
engine logic that Twitter has, it would be simpler for them to also
send the matching keywords. An even more elegant solution (yet
slightly more complex) would be to be able to pass parameters along
with the search I give, such as a unique search_id (that I can store
on my side), and then, instead of giving me the matched keywords/search
terms, they could just give me back that search_id. That would be
something like this:

Right now it is :
POST  http://stream.twitter.com/1/statuses/filter.json
track=paris,twitter+superfeedr,"julien near:france"

It would be awesome if I could do :
POST  http://stream.twitter.com/1/statuses/filter.json
track={"paris":"my_search_1","twitter
+superfeedr":"my_search_2","julien near:france":"my_search_3"}

And then, upon notifications, they would just pass me this search key
my_search_xx

I know and understand that this implies a little bit of work for Twitter,
but it also removes the pain from each subscriber to this streaming
API who has to re-implement the Twitter "search engine" again and
again.






On Dec 4, 11:33 am, Dave Sherohman  wrote:
> On Thu, Dec 03, 2009 at 03:12:05PM -0800, Julien wrote:
> > Well, then I'd need some help with that...
>
> > Again, it's easy with single search keywords, but I haven't found a
> > solution for combined searches like twitter+stream or photo+Paris...
> > because I would have to compare each combination of tokens in the
> > tweet...
>
> > Can someone give more details.
>
> I don't mean to be flogging my site today, but take a look 
> at http://fishtwits.com for the results I'm producing (just click the logo
> at the top of the page to view the full site without logging in):  Any
> tweets from users followed by FishTwits are scanned for fishing-related
> terms and all such terms found in the tweet are displayed below it.  At
> this moment, for instance, the first displayed tweet shows matches for
> both "Fly Fishing" and "Sole".
>
> This is accomplished with the following Perl code (edited to remove
> parts which aren't directly relevant):
>
> sub load_from_text {
>   my ($class, $text) = @_;
>
>   unless($topic_regex) {
>     require Regexp::Assemble;
>     my $ra = Regexp::Assemble->new(
>                chomp => 0,
>                anchor_word_begin => 1,
>                anchor_word_end => 1,
>              );
>     for my $topic (@topic_list) {
>       $ra->add(lc $topic);
>     }
>     $topic_regex = $ra->re;
>   }
>
>   $text = lc $text;
>   my @topics = $text =~ /$topic_regex/g;
>
>   return sort @topics;
>
> }
>
> It first uses Regexp::Assemble to build a $topic_regex[1] which will
> match any of the words/phrases found in the topic table, then does a
> global match of $text (the body of the tweet being examined) against
> $topic_regex, capturing all matches into the array @topics, which is
> then sorted and returned to the caller.
>
> After the match is performed, @topics contains every search term which
> is matched, no matter how many there may be, which should fill your
> requirement for "combined searches", unless I'm misunderstanding it.
>
> If you mean you would want that "Fly Fishing", "Sole" tweet to return
> three hits rather than two ("Fly Fishing", "Sole", "Fly Fishing+Sole"),
> that's easy enough to create from @topics, just generate every
> permutation of the terms which the individual tweet matched.
>
> [1]  If you're only dealing with 10 or so keywords, you'd probably be
> just as well off building the regex by hand.  The main reason I'm using
> Regexp::Assemble to do it on the fly is because manually creating and
> then maintaining a regex that will efficiently match any of 1300 terms
> would be a nightmare.
>
> --
> Dave Sherohman


[twitter-dev] Re: Track streaming : how to match tweets?

2009-12-07 Thread Julien
Hum... ok... sad, but I have an idea. Please tell me if this is
stupid.

So, for each tweet I receive, I know what searches it _may_ match.
Right?
So, with all these "candidate" queries, what I can do is perform them
against the regular search API (when they're complex). If the
results of that polling include the tweet, then I know the search
matches, and I don't have to build anything on top of what you built.

Let's take an example:
- If I have a search for "starbuck AND free near:94123"
- I track "starbuck" with the streaming API
- Whenever you guys send me a tweet for this track
- I check internally all the queries that may match Starbucks
- I perform them on your search API
- If the tweet you sent me is in the results, then I know this tweet
is valid
- If not, I discard it.

My only concern here is the 20k/hour limit. I think this is still
doable, because:
1) we will only make queries to the search API when we receive
notifications
2) we will only make queries to the search API for complex queries
(i.e. AND, +, "" or near:)

The pros:
- whenever you guys change/add stuff to your search DSL, I don't have to
change anything on my side.

How does that sound?

Thanks John anyway for your great help!

Julien
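
A sketch, under a few assumptions, of the verification step described
above: take a streamed tweet that matched the simple track term, run
the full complex query against the search API (here the
search.twitter.com/search.json endpoint of that era and its results
array), and keep the tweet only if its id shows up. The names, query
and status id are illustrative, and the 20k/hour limit still applies.

use strict;
use warnings;
use LWP::Simple qw(get);
use JSON::PP qw(decode_json);
use URI::Escape qw(uri_escape);

# Does $status_id show up when the full complex query is run against the
# search API? If so, the streamed tweet really satisfies the complex search.
sub confirmed_by_search {
    my ($status_id, $complex_query) = @_;
    my $url  = 'http://search.twitter.com/search.json?q='
             . uri_escape($complex_query);
    my $json = get($url) or return 0;               # network or rate limit
    my $data = eval { decode_json($json) } or return 0;
    return scalar grep { ($_->{id} // 0) == $status_id }
                  @{ $data->{results} || [] };
}

# A tweet arrived on the stream because it contained "starbuck"...
my $keep = confirmed_by_search(6253282, 'starbuck AND free near:94123');
print $keep ? "keep it\n" : "discard it\n";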


On Dec 5, 3:32 pm, John Kalucki  wrote:
> This could only make sense if the Streaming API supported "search engine
> logic". Currently Streaming only supports keyword matching -- you have to
> post-process to add additional predicate operators beyond OR. You can
> reproduce the keyword match in a few lines of code, and the rest is
> (currently) all up to you anyway. Just remember that a given tweet could
> have triggered multiple predicates.
>
> Beyond being a low priority feature, rendering and delivering custom
> responses per user would be a performance risk. We currently can support a
> very large number of filter clients per server, and we want to preserve this
> performance.
>
> -John Kalucki
> http://twitter.com/jkalucki
> Services, Twitter Inc.
>
>
>
> On Sat, Dec 5, 2009 at 3:18 AM, Julien  wrote:
> > Thanks Dave,
>
> > I think I get it from your example... yet, in our case, we have
> > several thousands of keywords, and many many complex searches (with
> > filter:, "and", "or", :near ... an so on).
>
> > I keep thinking that instead of re-implementing on my side the search
> > engine logic that Twitter has, it would be simpler for them to also
> > send the macthing keywords. And even more elegant solution (yet
> > slightly more complex) would be to be able to parse parameters along
> > with the search I give, such as a unique search_id (that I can store
> > on my side) and then, instead of giving me the matched keywords/search
> > terms, they could just give me back that search_id. That would be
> > something like this :
>
> > Right now it is :
> > POST  http://stream.twitter.com/1/statuses/filter.json
> > track=paris,twitter+superfeedr,"julien
> > near:france"
>
> > It would be awesome if I could do :
> > POST  http://stream.twitter.com/1/statuses/filter.json
> > track={"paris":"my_search_1","twitter
> > +superfeedr":"my_search_2","julien near:france":"my_search_3"}
>
> > And then, upon notifications, they would just pass me this search key
> > my_search_xx
>
> > I know and understand and implies a little bit of work for Twitter,
> > but it also removes the pain from each susbcriber to this streaming
> > API who has to re-implement again and again the "search engine" from
> > Twitter.
>
> > On Dec 4, 11:33 am, Dave Sherohman  wrote:
> > > On Thu, Dec 03, 2009 at 03:12:05PM -0800, Julien wrote:
> > > > Well, then I'd need some help with that...
>
> > > > Again, it's easy with single search keywords, but I haven't found a
> > > > solution for combined searches like twitter+stream or photo+Paris...
> > > > because I would have to compare each combination of tokens in the
> > > > tweet...
>
> > > > Can someone give more details.
>
> > > I don't mean to be flogging my site today, but take a look
> > at http://fishtwits.com for the results I'm producing (just click the logo
> > > at the top of the page to view the full site without logging in):  Any
> > > tweets from users followed by FishTwits are scanned for fishing-related
> > > terms and all such terms found in the tweet are displayed below it.  At
> > > this moment, for instance, the first

[twitter-dev] Re: Track streaming : how to match tweets?

2009-12-08 Thread Julien
Thanks Mark, but as I said, we need to fetch more complex feeds too. So
we'll use the OR with the simple terms, and then query the search API
with the complex query to see if a given tweet matches what we need!

Julien
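
A tiny Perl sketch of the client-side AND Mark describes below: the
stream can only deliver tweets matching "starbucks" or "free", so the
conjunction is checked locally. The helper name and sample text are
illustrative.

use strict;
use warnings;

# The stream delivers anything matching "starbucks" OR "free";
# the AND has to be applied on our side.
sub matches_all {
    my ($text, @terms) = @_;
    my $lc = lc $text;
    for my $term (@terms) {
        return 0 if index($lc, lc $term) < 0;
    }
    return 1;
}

print matches_all('Free coffee at Starbucks today', 'starbucks', 'free')
    ? "matches starbucks AND free\n"
    : "only a partial match\n";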

On Dec 8, 12:55 am, Mark McBride  wrote:
> Note that search API whitelisting is different from regular API
> whitelisting, and getting a 20k hour limit there is much more
> restrictive.
>
> I still haven't seen a case where you couldn't do the matching on your
> side.  As John says, with the streaming API right now you can only
> match simple terms, so the complex terms aren't a factor.  In fact the
> track you posted won't actually function as you intend with the
> streaming API.  You could track for tweets containing starbucks or
> free.  But currently that's it.  "starbucks AND free" is something
> you'd have to implement on your side.  Same with near.
>
>
>
>
>
> On Mon, Dec 7, 2009 at 3:45 PM,Julien wrote:
> > Hum... ok... sad, but I have an idea. Please tell me if this is
> > stupid.
>
> > So, for each tweet I receive, I know what searches it _may_ match.
> > Right?
> > So, with all these "candidates" query, what I can do is perform them
> > against the regular search API (as long as they're complex). If the
> > result from the polling includes them, then, I know that the searches
> > matches and I don't have to build anything on top of what you built.
>
> > Let's take an example :
> > -  If I have a search for "starbuck AND free near:94123"
> > - I track "starbuck" with the streaming API
> > - Whenever you guys send me a tweet for this track
> > -  I check internally all the queries that may match Starbucks
> > - I perform them on your API
> > - if the tweet you sent me is in the results, then I know this tweet
> > is valid,
> > - if not, I discard it.
>
> > My only concern here is the 20k/hour limit. I think this is still
> > doable, because
> > 1) we will only make queries to the search API when we receive
> > notifications
> > 2) we will only make queries to the search API for complex queries
> > (IE : AND, +, "" or near:
>
> > The pros :
> > - whener you guys change/add stuff to your search DSL, I don't have to
> > change anything on my side.
>
> > How does that sound?
>
> > Thanks John anyway for your great help!
>
> >Julien
>
> > On Dec 5, 3:32 pm, John Kalucki  wrote:
> >> This could only make sense if the Streaming API supported "search engine
> >> logic". Currently Streaming only supports keyword matching -- you have to
> >> post-process to add additional predicate operators beyond OR. You can
> >> reproduce the keyword match in a few lines of code, and the rest is
> >> (currently) all up to you anyway. Just remember that a given tweet could
> >> have triggered multiple predicates.
>
> >> Beyond being a low priority feature, rendering and delivering custom
> >> responses per user would be a performance risk. We currently can support a
> >> very large number of filter clients per server, and we want to preserve 
> >> this
> >> performance.
>
> >> -John Kalucki
> >> http://twitter.com/jkalucki
> >> Services, Twitter Inc.
>
> >> On Sat, Dec 5, 2009 at 3:18 AM,Julien wrote:
> >> > Thanks Dave,
>
> >> > I think I get it from your example... yet, in our case, we have
> >> > several thousands of keywords, and many many complex searches (with
> >> > filter:, "and", "or", :near ... an so on).
>
> >> > I keep thinking that instead of re-implementing on my side the search
> >> > engine logic that Twitter has, it would be simpler for them to also
> >> > send the macthing keywords. And even more elegant solution (yet
> >> > slightly more complex) would be to be able to parse parameters along
> >> > with the search I give, such as a unique search_id (that I can store
> >> > on my side) and then, instead of giving me the matched keywords/search
> >> > terms, they could just give me back that search_id. That would be
> >> > something like this :
>
> >> > Right now it is :
> >> > POST  http://stream.twitter.com/1/statuses/filter.json
> >> > track=paris,twitter+superfeedr,"julien
> >> > near:france"
>
> >> > It would be awesome if I could do :
> >> > POST  http://str

[twitter-dev] A PubSubHubbub hub for Twitter

2010-03-01 Thread Julien
Ola!

I know this is some kind of recurring topic for this mailing list. I
know all the heat around it, but I think that Twitter's new strategy
concerning their firehose is a good occasion to push them to implement
the PubSubHubbub protocol.

Superfeedr makes RSS feeds realtime. We host hubs for several big
publishers, including Tumblr, Posterous, HuffingtonPost, Gawker and
several others.

We want to make one for Twitter. Help us assess the need and
convince Twitter they need one (hosted by us, or even by them, if they'd
rather go down that route):

http://bit.ly/hub4twitter

Any comment/suggestion is more than welcome.
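
For readers who have not implemented it, a rough Perl sketch of what a
PubSubHubbub (0.3-style) subscription request looks like from the
subscriber side; the hub, topic and callback URLs below are
hypothetical placeholders, not endpoints Twitter or Superfeedr have
announced.

use strict;
use warnings;
use LWP::UserAgent;

# Hypothetical hub, topic and callback -- only the parameter shape of a
# PubSubHubbub 0.3 subscription request is the point here.
my $res = LWP::UserAgent->new->post(
    'http://hub.example.com/',
    {
        'hub.mode'     => 'subscribe',
        'hub.topic'    => 'http://twitter.com/statuses/user_timeline/julien51.rss',
        'hub.callback' => 'http://example.com/pshb/callback',
        'hub.verify'   => 'async',
    },
);
print $res->status_line, "\n";   # 202/204 means the hub accepted the request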


[twitter-dev] Re: A PubSubHubbub hub for Twitter

2010-03-01 Thread Julien
Ed,

On Mar 1, 5:23 pm, "M. Edward (Ed) Borasky"  wrote:
> In light of today's announcement, I'm not sure what the benefits of a
> "middleman" would be.
>
> http://blog.twitter.com/2010/03/enabling-rush-of-innovation.html
>
> Can you clarify
>
> a. How much it would cost me to get Twitter data from you via
> PubSubHubbub vs. getting the feeds directly from Twitter?
Free, obviously... as with the use of any hub we host!

> b. What benefits there are to acquiring Twitter data via PubSubHubbub
> over direct access?
Much simpler to deal with than a specific Twitter streaming API,
especially if your app has already implemented the protocol for
Identica, Buzz, Tumblr, sixapart, posterous, google reader... it's all
about "standards".




>
> On Mar 1, 3:08 pm, Julien  wrote:
>
>
>
> > Ola!
>
> > I know this s some kind of recurring topic for this mailing list. I
> > know all the heat around it, but I think that Twitter's new strategy
> > concerning their firehose is a good occasion to push them to implement
> > the PubSubHubbub protocol.
>
> > Superfeedr makes RSS feeds realtime. We host hubs for several big
> > publishers, including Tumblr, Posterous, HuffingtonPost, Gawker and
> > several others.
>
> > We want to make one for Twitter. Help us assessing the need and
> > convince Twitter they need one (hosted by us or even them, if they'd
> > rather go down that route) :
>
> >http://bit.ly/hub4twitter
>
> > Any comment/suggestion is more than welcome.


[twitter-dev] Re: A PubSubHubbub hub for Twitter

2010-03-02 Thread Julien
Andrew, it's not so much about making a "simpler" API, but about making
it standard: having the same API to get content from 6A blogs, Tumblr
blogs, media sites, social networks... is much easier than
implementing one for each service out there.

After a short day of polling, here are some results:

Do you currently use the Twitter Streaming API?
Yes 18  53%
No  16  47%

Would you use a Twitter PubSubHubbub hub if it was available?
Yes 33  97%
No  1   3%

Have you already implemented PubSubHubbub?
Yes 24  71%
No  10  29%


Obviously, 34 is _not_ a big enough number for me to claim we have a
representative panel of respondents, but we also have "big" names in
here (including some who have access to the firehose), which makes me
think that PubSubHubbub should be a viable option for Twitter.

If you read this, please take some time to respond:

http://bit.ly/hub4twitter

Thanks all.

Cheers,

Julien


On Mar 1, 9:02 pm, Andrew Badera  wrote:
> But how much simpler does it need to be? The streaming API is dead
> simple. I implemented what seems to be a full client with delete,
> limit and backoff in parts of two working days. Honestly I think it
> took me longer to write a working PubSubHubbub subscriber client than
> it did a Twitter Streaming API client.
>
> It would be nice if the world was full of free data and universal
> standards, but if it ain't broke, and it's already invested in, why
> fix it?
>
> ∞ Andy Badera
> ∞ +1 518-641-1280 Google Voice
> ∞ This email is: [ ] bloggable [x] ask first [ ] private
> ∞ Google me:http://www.google.com/search?q=andrew%20badera
>
>
>
> On Mon, Mar 1, 2010 at 8:44 PM, Julien  wrote:
> > Ed,
>
> > On Mar 1, 5:23 pm, "M. Edward (Ed) Borasky"  wrote:
> >> In light of today's announcement, I'm not sure what the benefits of a
> >> "middleman" would be.
>
> >>http://blog.twitter.com/2010/03/enabling-rush-of-innovation.html
>
> >> Can you clarify
>
> >> a. How much it would cost me to get Twitter data from you via
> >> PubSubHubbub vs. getting the feeds directly from Twitter?
> > Free, obviously... as with the use of any hub we host!
>
> >> b. What benefits there are to acquiring Twitter data via PubSubHubbub
> >> over direct access?
> > Much simpler to deal with than a specific streaming Twitter API,
> > specifically if your app has already implemented the protocol for
> > Identica, Buzz, Tumblr, sixapart, posterous, google reader... it's all
> > about "standards".
>
> >> On Mar 1, 3:08 pm, Julien  wrote:
>
> >> > Ola!
>
> >> > I know this s some kind of recurring topic for this mailing list. I
> >> > know all the heat around it, but I think that Twitter's new strategy
> >> > concerning their firehose is a good occasion to push them to implement
> >> > the PubSubHubbub protocol.
>
> >> > Superfeedr makes RSS feeds realtime. We host hubs for several big
> >> > publishers, including Tumblr, Posterous, HuffingtonPost, Gawker and
> >> > several others.
>
> >> > We want to make one for Twitter. Help us assessing the need and
> >> > convince Twitter they need one (hosted by us or even them, if they'd
> >> > rather go down that route) :
>
> >> >http://bit.ly/hub4twitter
>
> >> > Any comment/suggestion is more than welcome.


[twitter-dev] Re: A PubSubHubbub hub for Twitter

2010-03-03 Thread Julien
All, we just posted the results on our blog:
http://blog.superfeedr.com/API/PubSubHubbub/Twitter/feeds/streaming/a-hub-for-twitter/

I'll also send them to John Kalucki and Ryan Sarver. It's their turn
to play :D

On Mar 2, 7:57 am, Julien  wrote:
> Andrew, it's not so much about making a "simpler" API, but making it
> standard : having the same API to get content from 6A blogs, Tumblr's
> blogs, media sites, social networks... is much easier than
> implementing one for each service out there.
>
> After a small day of poll, here are some results :
>
> Do you currently use the Twitter Streaming API?
> Yes             18      53%
> No              16      47%
>
> Would you use a Twitter PubSubHubbub hub if it was available?
> Yes             33      97%
> No              1       3%
>
> Have you already implemented PubSubHubbub?
> Yes             24      71%
> No              10      29%
>
> Obviously, 34 is _not_ a big enough number that I think we have a
> representative panel of respondant, but we also have "big" names in
> here, (including some who have access in the firehose), which makes me
> think that PubSubHubbub should be a viable option for Twitter.
>
> If you read this, please take some take to respond :
>
> http://bit.ly/hub4twitter
>
> Thanks all.
>
> Cheers,
>
> Julien
>
> On Mar 1, 9:02 pm, Andrew Badera  wrote:
>
>
>
> > But how much simpler does it need to be? The streaming API is dead
> > simple. I implemented what seems to be a full client with delete,
> > limit and backoff in parts of two working days. Honestly I think it
> > took me longer to write a working PubSubHubbub subscriber client than
> > it did a Twitter Streaming API client.
>
> > It would be nice if the world was full of free data and universal
> > standards, but if it ain't broke, and it's already invested in, why
> > fix it?
>
> > ∞ Andy Badera
> > ∞ +1 518-641-1280 Google Voice
> > ∞ This email is: [ ] bloggable [x] ask first [ ] private
> > ∞ Google me:http://www.google.com/search?q=andrew%20badera
>
> > On Mon, Mar 1, 2010 at 8:44 PM, Julien  wrote:
> > > Ed,
>
> > > On Mar 1, 5:23 pm, "M. Edward (Ed) Borasky"  wrote:
> > >> In light of today's announcement, I'm not sure what the benefits of a
> > >> "middleman" would be.
>
> > >>http://blog.twitter.com/2010/03/enabling-rush-of-innovation.html
>
> > >> Can you clarify
>
> > >> a. How much it would cost me to get Twitter data from you via
> > >> PubSubHubbub vs. getting the feeds directly from Twitter?
> > > Free, obviously... as with the use of any hub we host!
>
> > >> b. What benefits there are to acquiring Twitter data via PubSubHubbub
> > >> over direct access?
> > > Much simpler to deal with than a specific streaming Twitter API,
> > > specifically if your app has already implemented the protocol for
> > > Identica, Buzz, Tumblr, sixapart, posterous, google reader... it's all
> > > about "standards".
>
> > >> On Mar 1, 3:08 pm, Julien  wrote:
>
> > >> > Ola!
>
> > >> > I know this s some kind of recurring topic for this mailing list. I
> > >> > know all the heat around it, but I think that Twitter's new strategy
> > >> > concerning their firehose is a good occasion to push them to implement
> > >> > the PubSubHubbub protocol.
>
> > >> > Superfeedr makes RSS feeds realtime. We host hubs for several big
> > >> > publishers, including Tumblr, Posterous, HuffingtonPost, Gawker and
> > >> > several others.
>
> > >> > We want to make one for Twitter. Help us assessing the need and
> > >> > convince Twitter they need one (hosted by us or even them, if they'd
> > >> > rather go down that route) :
>
> > >> >http://bit.ly/hub4twitter
>
> > >> > Any comment/suggestion is more than welcome.


[twitter-dev] Re: annotations access

2010-09-04 Thread Julien C
Hi Chris,

if I got it right, the feature's indeed not generally available yet,
and its rollout does not rank very high in priority any more.

Would be great to hear more from Twitter, though.

Julien

On Sep 2, 2:21 am, Chris Anderson  wrote:
> Howdy,
>
> I'm building a Twitter client that needs to make use of annotations to
> avoid displaying duplicate tweets to the end-user (long story...).
>
> Do I need to do something special to get access to the annotations
> API? I think I am posting my annotations correctly, but I can't be
> sure, as they are not appearing when I read the statuses with curl, or
> in my user stream.
>
> Is anyone else out there successfully using annotations? Is the
> feature not generally available yet? If not, how does one go about
> getting on the beta group?
>
> Thanks in advance,
> Chris



[twitter-dev] Re: users/lookup returns duplicates, missing records for valid users

2011-03-02 Thread David JULIEN
I have noticed this strange behaviour too (duplicated results and
unknown users). For instance, yesterday, when I tried to look up
user 44537294 (with two different accounts), I received for many
hours information about user 243784138 before receiving the expected
result (around 17/18h UTC).

David



[twitter-dev] Re: Bigger avatar images for users/profile_image/twitter ?

2011-03-24 Thread Julien C
Yes, but it's not square, which I don't like. I've got the same
problem with Facebook, actually: the square version of the profile pic
is tiny.
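
Assuming the suffix convention visible in the URLs quoted below (a
guess at the pattern, not a documented API), here is a small Perl
sketch that derives the other sizes from the _normal profile image
URL the API returns.

use strict;
use warnings;

# Swap the "_normal" suffix for another variant ('mini', 'bigger',
# or '' for the original upload). Purely a guess at the convention.
sub avatar_variant {
    my ($normal_url, $variant) = @_;
    (my $url = $normal_url)
        =~ s/_normal(\.\w+)$/($variant ? "_$variant" : '') . $1/e;
    return $url;
}

print avatar_variant(
    'http://a3.twimg.com/profile_images/361706538/mk1_normal.jpg',
    'bigger'
), "\n";
# -> http://a3.twimg.com/profile_images/361706538/mk1_bigger.jpg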

On Mar 7, 6:22 pm, "Ken D."  wrote:
> > Avatars come in three sizes:
>
> >         mini = 24x24
> >         normal = 48x48
> >         bigger = 73x73
> >         reasonably_small = 128x128
>
> >http://a3.twimg.com/profile_images/361706538/mk1_mini.jpg
> >http://a3.twimg.com/profile_images/361706538/mk1_normal.jpg
> >http://a3.twimg.com/profile_images/361706538/mk1_bigger.jpg
> >http://a3.twimg.com/profile_images/361706538/mk1_reasonably_small.jpg
>
> The original seems to be available 
> at http://a3.twimg.com/profile_images/361706538/mk1.jpg



[twitter-dev] Re : Re: New Photo upload feature: What's new & coming on the API side

2011-06-06 Thread Julien Larios
Hi there,

I've implemented this new way of photo sharing on Twitter in Picsi (along
with Twitpic support) and it works fine (based on Twitter4J 2.2.3).
These pictures can be used in the first 2 Picsi apps: Media RSS export and
ZIP backup.

But Arnaud (or should I say 'Dear Raptor fan'? ;), do you know if external
picture hosting services (like Twitpic) will be made available via this API
branch?
That would be great: grabbing all kinds of photos via a single API syntax
(instead of funky tweet parsing).

Thanks

