[twitter-dev] Re: Data Mining timeline is empty

2009-05-15 Thread Doug Williams
This was reported and is being tracked as issue 582:
http://code.google.com/p/twitter-api/issues/detail?id=582

Thanks,
Doug
--

Doug Williams
Twitter Platform Support
http://twitter.com/dougw




On Thu, May 14, 2009 at 7:28 PM, elversatile  wrote:

>
> I have the Data Mining public timeline returning an empty statuses node (in
> XML). It was stuck for 30 or so hours before (just like for everybody
> else), but now I'm not getting anything back. Is this a general
> problem for everybody that you guys are working on, or is it something
> specific to my account?
>


[twitter-dev] update_profile_background_image API call. Again

2009-05-15 Thread Voituk Vadim

Hi, I'm getting strange errors when using the update_profile_background_image
API call (see curl dump below).

The uploaded image size is 10 KB.
I've also tried the
  mqpro_glowdotsGray.br.jpg;type=image/jpeg
and
  mqpro_glowdotsGray.br.jpg;type=image/jpg
notations and got the same result.

Using the same code for the update_profile_image call works perfectly.

What am I doing wrong? Or are there any active issues related to this
API?

curl -v -F 'image=@/path/to/image/twallpapers/backgrounds/thumbs/
mqpro_glowdotsGray.br.jpg' --header 'Expect:' -u mylogin:mypassword
http://twitter.com/account/update_profile_background_image.xml
* About to connect() to twitter.com port 80
*   Trying 128.121.146.100... connected
* Connected to twitter.com (128.121.146.100) port 80
* Server auth using Basic with user 'mylogin'
> POST /account/update_profile_background_image.xml HTTP/1.1
> Authorization: Basic [hidden]
> User-Agent: curl/7.15.5 (i686-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b 
> zlib/1.2.3 libidn/0.6.5
> Host: twitter.com
> Accept: */*
> Content-Length: 10662
> Content-Type: multipart/form-data; 
> boundary=5a35eee9279a
>
< HTTP/1.1 403 Forbidden
< Date: Fri, 15 May 2009 08:32:17 GMT
< Server: hi
< Last-Modified: Fri, 15 May 2009 08:32:17 GMT
< Status: 403 Forbidden
< Pragma: no-cache
< Cache-Control: no-cache, no-store, must-revalidate, pre-check=0,
post-check=0
< Content-Type: application/xml; charset=utf-8
< Content-Length: 203
< Expires: Tue, 31 Mar 1981 05:00:00 GMT
< X-Revision: e8cdf86372da838f4cb112e826cddcd374bcee16
< X-Transaction: 1242376337-96911-6692
< Set-Cookie: lang=; path=/
< Set-Cookie:
_twitter_sess=BAh7CToJdXNlcmkE7PAFAToTcGFzc3dvcmRfdG9rZW4iLTFjZGViYWQ2Yzhm
%250ANTBhZTE5ODExNWJjMmJhNTUxNTFiZmI1NDIxNjQ6B2lkIiVhOTlhMWViNDU4%250AMTdmNzJlMjdmNzY2MTllN2JhZGRmYSIKZmxhc2hJQzonQWN0aW9uQ29udHJv
%250AbGxlcjo6Rmxhc2g6OkZsYXNoSGFzaHsABjoKQHVzZWR7AA%253D
%253D--13dd462896dfa3b13448f4535612802942248a82; domain=.twitter.com;
path=/
< Vary: Accept-Encoding
< Connection: close


<?xml version="1.0" encoding="UTF-8"?>
<hash>
  <request>/account/update_profile_background_image.xml</request>
  <error>There was a problem with your background image. Probably too big.</error>
</hash>

* Closing connection #0
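
For reference, the same request can be sketched with Python's requests
library (the path and credentials below are placeholders mirroring the
curl call above; this is only a sketch, not a known-working fix):

    import requests

    # Placeholder path and credentials; mirrors the curl -F upload above.
    URL = "http://twitter.com/account/update_profile_background_image.xml"
    IMAGE = "/path/to/image/twallpapers/backgrounds/thumbs/mqpro_glowdotsGray.br.jpg"

    with open(IMAGE, "rb") as f:
        # The file goes in the multipart/form-data field "image", with an
        # explicit JPEG content type, over HTTP Basic auth.
        files = {"image": ("mqpro_glowdotsGray.br.jpg", f, "image/jpeg")}
        resp = requests.post(URL, files=files, auth=("mylogin", "mypassword"))

    print(resp.status_code)
    print(resp.text)  # Twitter returns an XML error document on failure.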


[twitter-dev] Re: Status ID closing in on maximum unsigned integer

2009-05-15 Thread Martin Dufort

And you can check on the progress of that Status ID on www.twitpocalypse.com
(unless Twitter decides to blacklist us for sending too many requests
to the public timeline API...).

So Twitter, if you do so, please send me an email martin [insert at]
wherecloud [dot] com so we can work something out :-)

Martin - www.wherecloud.com


On May 13, 9:07 pm, Craig Hockenberry 
wrote:
> Let me be the first to say THANK YOU for this advance notice. I found
> and fixed some bugs in our iPhone client today because of it — and I'm
> very happy to hear that we have some time to get the code approved in
> the App Store.
>
> We love to complain when the process goes awry, but it's also
> important to remember to give credit when it's well and truly due.
> Thanks again!
>
> -ch
>
> On May 13, 12:12 pm, Matt Sanford  wrote:
>
> > Quick update …
>
> >      While looking at the code that reminded me of this error I see it  
> > had some bugs of its own. We seem to have a matter of weeks rather  
> > than days before this change. Mobile developers and others who deride
> > our lack of early notice take heed … now is the time to go unsigned.
>
> > Thanks;
> >   – Matt Sanford / @mzsanford
> >       Twitter Dev
>
> > On May 13, 2009, at 10:49 AM, Cameron Kaiser wrote:
>
> > >> I see that the product manager/APIs position is still on your  
> > >> site... does
> > >> that mean the position is still open?  Does teasing the API team  
> > >> help as a
> > >> qualification?
>
> > > It may get you an interview, but only so they can surreptitiously  
> > > get a
> > > photo and send it to the nice Italian men with the kneecappers.
>
> > > --
> > > personal: http://www.cameronkaiser.com/ --
> > > Cameron Kaiser * Floodgap Systems * www.floodgap.com * ckai...@floodgap.com
> > > -- They told me I was gullible ... and I believed them. ---


[twitter-dev] [statuses/update] - "From Source" doesn't work sometimes

2009-05-15 Thread Michael

Hey guys,

Is anybody else having this problem? I have two apps registered at
Twitter. Both are working fine with OAuth. I can access (read/write)
data from Twitter.

While the first app shows “from source” correctly, the second app
doesn't.

Instead of “from source” it shows “from web”.

Maybe a Twitter bug? What do you think?

Best wishes,
Michael


[twitter-dev] Re: How do you store Twitter profiles in your database?

2009-05-15 Thread Arik Fraimovich

Patrick & weex - thank you both for your answers. I guess I will go
with #2. Thanks.

On May 15, 3:06 am, weex  wrote:
> Option two. That's what I do as well for Tweet Scan user search.
> Database fields should be atomic and I bet the profile schema doesn't
> change in a way that breaks your (properly coded) script for a long
> while.


[twitter-dev] Getting When A Friendship Began

2009-05-15 Thread Patrick Burrows

Is the created_at return value of http://twitter.com/statuses/friends.format
the date that person's Twitter account was created, or the date that
friendship began? 

My guess is it is when the twitter account was created. If that is so, then
is there a way to determine the date someone started following a person, or
being followed by a person?


--
Patrick Burrows
http://Categorical.ly (the Best Twitter Client Possible)
@Categorically




[twitter-dev] Re: How To: Remove Follower

2009-05-15 Thread Nick Arnett
On Thu, May 14, 2009 at 10:32 PM, TweetClean wrote:

>
> I see that there is the ability to remove people I am following using
> the friendships/destroy API call.  How do I remove someone who is
> following me?  I am sure it is right in front of my eyes but I am not
> making any connections.


I think that would be blocks/create.

Nick
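
A rough sketch of that call from Python, for anyone looking for a
starting point (placeholder credentials and screen name; assumes Basic
auth and the blocks/create endpoint Nick mentions):

    import requests

    # Placeholder follower and credentials; blocking a user removes the
    # follow relationship.
    follower = "some_follower"
    resp = requests.post(
        "http://twitter.com/blocks/create/%s.xml" % follower,
        auth=("mylogin", "mypassword"),
    )
    print(resp.status_code)  # 200 means the block went through.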


[twitter-dev] Re: http://twitter.com/home?status=thisusedtowork

2009-05-15 Thread Susan at DC.org

The fix seems to work from a link on an insecure page, but not from a
secure page.  Any thoughts?

Thanks,
Sue

On Apr 30, 7:13 pm, John Adams  wrote:
> On Apr 30, 2009, at 4:00 PM, Matt Sanford wrote:
>
>
>
> > Hi there,
>
> >     We're working on getting that fix out right now. I was hoping we  
> > would get the fix pushed out and I could just re-cap after the fact :)
>
> > Thanks;
> >  – Matt Sanford / @mzsanford
> >      Twitter Dev
>
> > On Apr 30, 2009, at 2:51 PM, Dave Winer wrote:
>
> >> I'm happy to report that I have the new UI on my account and it's nice.
>
> >> However, apparently the "status" param is no longer recognized.
>
> >>http://twitter.com/home?status=thisusedtowork
>
> >> That would put "thisusedtowork" in the "What are you doing?" box.
>
> >> Now of course I'm probably reading this wrong, or missed  
> >> something. :-)
>
> >> Any help would be much appreciated...
>
> >> Dave
>
> ---
> John Adams
> Twitter Operations
> j...@twitter.com
> http://twitter.com/netik
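
For anyone generating these links, a small sketch of building the
prefill URL with proper encoding (placeholder text; assumes only that
the status query parameter is what ends up in the update box, as
described above):

    import urllib.parse

    def tweet_link(text):
        # Build a twitter.com/home link that pre-fills the update box.
        return ("http://twitter.com/home?"
                + urllib.parse.urlencode({"status": text}))

    print(tweet_link("Reading the twitter-dev archives"))
    # http://twitter.com/home?status=Reading+the+twitter-dev+archives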


[twitter-dev] Possible Bug in Twitter Search API

2009-05-15 Thread briantroy

I've noticed this before but always tried to deal with it as a bug on
my side. It is, however, now clear to me that from time to time
Twitter Search API seems to ignore the since_id.

We track FollowFriday by polling Twitter Search every so often (the
process is throttled from 10 seconds to 180 seconds depending on how
many results we get). This works great 90% of the time. But on high
volume days (Fridays) I've noticed we get a lot of multi-page
responses causing us to make far too many requests to the Twitter API
(900/hour).
When attempting to figure out why we are making so many requests I
uncovered something very interesting. When we get a "tweet" we store
it in our database. That database has a unique index on the customer
id/Tweet Id. When we get multi-page responses from Twitter and iterate
through each page the VAST MAJORITY of the Tweets violate this unique
index. What does this mean? That we already have that tweet.
Today, I turned on some additional debugging and saw that the tweets
we were getting from Twitter Search were, in fact, prior to the
since_id we sent.

This is causing us to POUND the API servers unnecessarily. There is,
however, really nothing I can do about it on my end.

Here is a snip of the log showing the failed inserts and the ID we are
working with. The last line shows you both the old max id and the new
max id (after processing the tweets). As you can see every tweet
violates the unique constraint (27 is the customer id). You can also
see that we've called the API for this one search 1016 times this
hour... which is WAY, WAY too much (16.9 times per minute):

NOTICE: 10:45:37 AM on Fri May 15th Tweet insert failed: Duplicate
entry '27-1806522797' for key 2
SQL: insert into justsignal.tweets(text, tw_id, to_user_id, to_user,
from_user_id, from_user, iso_language_code, profile_image_url,
created_at, bulk_svc_id) values('#followfriday edubloggers
@CoolCatTeacher @dwarlick @ewanmcintosh @willrich45 @larryferlazzo
@suewaters',1806522797, 0, '', 192010, 'WeAreTeachers', 'en', 'http://
s3.amazonaws.com/twitter_production/profile_images/52716611/
Picture_2_normal.png', 'Fri, 15 May 2009 14:41:51 +', 27)
NOTICE: 10:45:37 AM on Fri May 15th Tweet insert failed: Duplicate
entry '27-1806522766' for key 2
SQL: insert into justsignal.tweets(text, tw_id, to_user_id, to_user,
from_user_id, from_user, iso_language_code, profile_image_url,
created_at, bulk_svc_id) values('thx for the #followfriday
love, @brokesocialite & @silveroaklimo.  Also thx to @diamondemory
& @bmichelle for the RTs of FF',1806522766, 0, '', 1149953,
'lmdupont', 'en', 'http://s3.amazonaws.com/twitter_production/
profile_images/188591402/lisaann_normal.jpg', 'Fri, 15 May 2009
14:41:51 +', 27)
NOTICE: 10:45:37 AM on Fri May 15th Tweet insert failed: Duplicate
entry '27-1806522760' for key 2
SQL: insert into justsignal.tweets(text, tw_id, to_user_id, to_user,
from_user_id, from_user, iso_language_code, profile_image_url,
created_at, bulk_svc_id) values('Thx! RT @dpbkmb: #followfriday
@ifeelgod @americandream09 @DailyHappenings @MrMilestone @emgtay
@Nurul54 @mexiabill @naturallyknotty',1806522760, 0, '', 1303322,
'borgellaj', 'en', 'http://s3.amazonaws.com/twitter_production/
profile_images/58399480/img017_normal.jpg', 'Fri, 15 May 2009 14:41:51
+', 27)
NOTICE: 10:45:37 AM on Fri May 15th Tweet insert failed: Duplicate
entry '27-1806522759' for key 2
SQL: insert into justsignal.tweets(text, tw_id, to_user_id, to_user,
from_user_id, from_user, iso_language_code, profile_image_url,
created_at, bulk_svc_id) values('Morning my tweets!!! follow
friday! Dnt forget to RT me in need of followers LOL!',1806522759,
0, '', 11790458, 'Dae_Marie', 'en', 'http://s3.amazonaws.com/
twitter_production/profile_images/199283178/dae_bab_normal.jpg',
'Fri, 15 May 2009 14:41:50 +', 27)
NOTICE: 10:45:37 AM on Fri May 15th Tweet insert failed: Duplicate
entry '27-1806522752' for key 2
SQL: insert into justsignal.tweets(text, tw_id, to_user_id, to_user,
from_user_id, from_user, iso_language_code, profile_image_url,
created_at, bulk_svc_id) values('#ff #followfriday
@dirtyert (he\'s started with scrap metal stories) and @soufron if you
speak French',1806522752, 0, '', 1704, 'vagredajr', 'en', 'http://
s3.amazonaws.com/twitter_production/profile_images/155241633/
_agreda_normal.jpg', 'Fri, 15 May 2009 14:41:50 +', 27)
NOTICE: 10:45:37 AM on Fri May 15th Tweet insert failed: Duplicate
entry '27-1806522729' for key 2
SQL: insert into justsignal.tweets(text, tw_id, to_user_id, to_user,
from_user_id, from_user, iso_language_code, profile_image_url,
created_at, bulk_svc_id) values('#followfriday @hootsuite
@FitnessMagazine @packagingdiva @MobileLifeToday',1806522729, 0, '',
11893419, 'ServiceFoods', 'it', 'http://s3.amazonaws.com/
twitter_production/profile_images/141678280/SF-shrunken_normal.bmp',
'Fri, 15 May 2009 14:41:50 +', 27)
Updating number for api hits for hour: 10 to: 1016
DEBUG: 10:45:37 AM on Fri May 15th Checking fo

[twitter-dev] Re: Status ID closing in on maximum unsigned integer

2009-05-15 Thread Patrick Burrows

Hilarious!



--
Patrick Burrows
http://Categorical.ly (the Best Twitter Client Possible)
@Categorically


-Original Message-
From: twitter-development-talk@googlegroups.com
[mailto:twitter-development-t...@googlegroups.com] On Behalf Of Martin
Dufort
Sent: Friday, May 15, 2009 10:08 AM
To: Twitter Development Talk
Subject: [twitter-dev] Re: Status ID closing in on maximum unsigned integer


And you can check on the progress of that Status ID on
www.twitpocalypse.com
(unless Twitter decides to blacklist us for sending too many requests
to the public timeline API...).

So Twitter, if you do so, please send me an email martin [insert at]
wherecloud [dot] com so we can work something out :-)

Martin - www.wherecloud.com


On May 13, 9:07 pm, Craig Hockenberry 
wrote:
> Let me be the first to say THANK YOU for this advance notice. I found
> and fixed some bugs in our iPhone client today because of it — and I'm
> very happy to hear that we have some time to get the code approved in
> the App Store.
>
> We love to complain when the process goes awry, but it's also
> important to remember to give credit when it's well and truly due.
> Thanks again!
>
> -ch
>
> On May 13, 12:12 pm, Matt Sanford  wrote:
>
> > Quick update …
>
> >      While looking at the code that reminded me of this error I see it  
> > had some bugs of its own. We seem to have a matter of weeks rather  
> > than days before this change. Mobile developers and others who deride
> > our lack of early notice take heed … now is the time to go unsigned.
>
> > Thanks;
> >   – Matt Sanford / @mzsanford
> >       Twitter Dev
>
> > On May 13, 2009, at 10:49 AM, Cameron Kaiser wrote:
>
> > >> I see that the product manager/APIs position is still on your  
> > >> site... does
> > >> that mean the position is still open?  Does teasing the API team  
> > >> help as a
> > >> qualification?
>
> > > It may get you an interview, but only so they can surreptitiously  
> > > get a
> > > photo and send it to the nice Italian men with the kneecappers.
>
> > > --
> > > personal: http://www.cameronkaiser.com/ --
> > > Cameron Kaiser * Floodgap Systems * www.floodgap.com * ckai...@floodgap.com
> > > -- They told me I was gullible ... and I believed them. ---



[twitter-dev] Re: Possible Bug in Twitter Search API

2009-05-15 Thread Matt Sanford


Hi Brian,

My guess is that this is the same since_id/max_id pagination  
confusion we have always had. If you look at the next_page URL in our  
API you'll notice that it does not contain the since_id. If you are  
searching with since_id and requesting multiple pages you need to  
manually stop pagination once you find an id lower than your original  
since_id. I know this is a pain but there is a large performance gain  
in it on our back end. There was an update a few weeks ago [1] where I  
talked about this and a warning message (twitter:warning in atom and  
"warning" in JSON) was added to alert you to the fact it had been  
removed. Does that sound like the cause of your issue?


Thanks;
 – Matt Sanford / @mzsanford
 Twitter Dev

[1] - 
http://groups.google.com/group/twitter-development-talk/browse_frm/thread/6e80cb6eec3a16d3?tvc=1
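
A minimal sketch of the stop-early loop described above, in Python
(function and parameter names are placeholders; assumes the Search API
JSON shape with a "results" list of numeric "id" fields and the
rpp/page/since_id parameters):

    import requests

    SEARCH_URL = "http://search.twitter.com/search.json"

    def search_since(query, since_id):
        # Fetch result pages for `query`, stopping as soon as an id at or
        # below `since_id` appears, because next_page drops the since_id.
        collected = []
        page = 1
        while True:
            resp = requests.get(SEARCH_URL, params={
                "q": query, "since_id": since_id, "rpp": 100, "page": page})
            results = resp.json().get("results", [])
            fresh = [t for t in results if t["id"] > since_id]
            collected.extend(fresh)
            # Stop paging once anything old (or an empty page) shows up.
            if not results or len(fresh) < len(results):
                break
            page += 1
        return collected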

On May 15, 2009, at 7:50 AM, briantroy wrote:



I've noticed this before but always tried to deal with it as a bug on
my side. It is, however, now clear to me that from time to time
Twitter Search API seems to ignore the since_id.

We track FollowFriday by polling Twitter Search every so often (the
process is throttled from 10 seconds to 180 seconds depending on how
many results we get). This works great 90% of the time. But on high
volume days (Fridays) I've noticed we get a lot of multi-page
responses causing us to make far too many requests to the Twitter API
(900/hour).
When attempting to figure out why we are making so many requests I
uncovered something very interesting. When we get a "tweet" we store
it in our database. That database has a unique index on the customer
id/Tweet Id. When we get mulit-page responses from Twitter and iterate
through each page the VAST MAJORITY of the Tweets violate this unique
index. What does this mean? That we already have that tweet.
Today, I turned on some additional debugging and saw that the tweets
we were getting from Twitter Search were, in fact, prior to the
since_id we sent.

This is causing us to POUND the API servers unnecessarily. There is,
however, really nothing I can do about it on my end.

Here is a snip of the log showing the failed inserts and the ID we are
working with. The last line shows you both the old max id and the new
max id (after processing the tweets). As you can see every tweet
violates the unique constraint (27 is the customer id). You can also
see that we've called the API for this one search 1016 times this
hour... which is WAY, WAY too much (16.9 times per second):

NOTICE: 10:45:37 AM on Fri May 15th Tweet insert failed: Duplicate
entry '27-1806522797' for key 2
SQL: insert into justsignal.tweets(text, tw_id, to_user_id, to_user,
from_user_id, from_user, iso_language_code, profile_image_url,
created_at, bulk_svc_id) values('#followfriday edubloggers
@CoolCatTeacher @dwarlick @ewanmcintosh @willrich45 @larryferlazzo
@suewaters',1806522797, 0, '', 192010, 'WeAreTeachers', 'en', 'http://
s3.amazonaws.com/twitter_production/profile_images/52716611/
Picture_2_normal.png', 'Fri, 15 May 2009 14:41:51 +', 27)
NOTICE: 10:45:37 AM on Fri May 15th Tweet insert failed: Duplicate
entry '27-1806522766' for key 2
SQL: insert into justsignal.tweets(text, tw_id, to_user_id, to_user,
from_user_id, from_user, iso_language_code, profile_image_url,
created_at, bulk_svc_id) values('thx for the #followfriday
love, @brokesocialite & @silveroaklimo.  Also thx to @diamondemory
& @bmichelle for the RTs of FF',1806522766, 0, '', 1149953,
'lmdupont', 'en', 'http://s3.amazonaws.com/twitter_production/
profile_images/188591402/lisaann_normal.jpg', 'Fri, 15 May 2009
14:41:51 +', 27)
NOTICE: 10:45:37 AM on Fri May 15th Tweet insert failed: Duplicate
entry '27-1806522760' for key 2
SQL: insert into justsignal.tweets(text, tw_id, to_user_id, to_user,
from_user_id, from_user, iso_language_code, profile_image_url,
created_at, bulk_svc_id) values('Thx! RT @dpbkmb: #followfriday
@ifeelgod @americandream09 @DailyHappenings @MrMilestone @emgtay
@Nurul54 @mexiabill @naturallyknotty',1806522760, 0, '', 1303322,
'borgellaj', 'en', 'http://s3.amazonaws.com/twitter_production/
profile_images/58399480/img017_normal.jpg', 'Fri, 15 May 2009 14:41:51
+', 27)
NOTICE: 10:45:37 AM on Fri May 15th Tweet insert failed: Duplicate
entry '27-1806522759' for key 2
SQL: insert into justsignal.tweets(text, tw_id, to_user_id, to_user,
from_user_id, from_user, iso_language_code, profile_image_url,
created_at, bulk_svc_id) values('Morning my tweets!!! follow
friday! Dnt forget to RT me in need of followers LOL!',1806522759,
0, '', 11790458, 'Dae_Marie', 'en', 'http://s3.amazonaws.com/
twitter_production/profile_images/199283178/dae_bab_normal.jpg',
'Fri, 15 May 2009 14:41:50 +', 27)
NOTICE: 10:45:37 AM on Fri May 15th Tweet insert failed: Duplicate
entry '27-1806522752' for key 2
SQL: insert into justsignal.tweets(text, tw_id, to_user_id, to_user,
from_user_id, from_user, iso_language_code, profile_image_url,
created_at, bulk_svc_

[twitter-dev] Re: http://twitter.com/home?status=thisusedtowork

2009-05-15 Thread Matt Sanford


Hi there,

A bug was re-introduced with the ?status parameter. I noticed it  
yesterday on the replies link on search.twitter.com and we've got a  
fix ready to go out with our next deploy. Sorry for the inconvenience.


Thanks;
 – Matt Sanford / @mzsanford
 Twitter Dev

On May 15, 2009, at 7:30 AM, Susan at DC.org wrote:



The fix seems to work from a link on an insecure page, but not from a
secure page.  Any thoughts?

Thanks,
Sue

On Apr 30, 7:13 pm, John Adams  wrote:

On Apr 30, 2009, at 4:00 PM, Matt Sanford wrote:




Hi there,



We're working on getting that fix out right now. I was hoping we
would get the fix pushed out and I could just re-cap after the  
fact :)



Thanks;
 – Matt Sanford / @mzsanford
 Twitter Dev



On Apr 30, 2009, at 2:51 PM, Dave Winer wrote:


I'm happy to report that I have the new UI on my account and it's nice.



However, apparently the "status" param is no longer recognized.



http://twitter.com/home?status=thisusedtowork



That would put "thisusedtowork" in the "What are you doing?" box.



Now of course I'm probably reading this wrong, or missed
something. :-)



Any help would be much appreciated...



Dave


---
John Adams
Twitter Operations
j...@twitter.com
http://twitter.com/netik




[twitter-dev] How to count to 140: or, characters vs. bytes vs. entities, third strike

2009-05-15 Thread leoboiko

So.  Much twitter documentation talks about “140 characters” and “160
characters limit”.  But “character” is not a raw data type, it’s a
string type.  It has been observed[1][2][3][4] that 1) twitter expects
these characters to be encoded as UTF-8 (or ASCII, which is a strict
subset of UTF-8), and 2) the limit really is 140/160 *bytes*, not
characters (UTF-8 characters can use up to 4 bytes each; two bytes per
character are common for European languages and three to four bytes
are common for Asian scripts, Indic &c.).

Later I intend to thoroughly replace “characters” by “bytes” (or
“UTF-8 byte count”, &c.) in the API wiki.  Hope it’s ok with everyone.

* * *

Many twitter applications want to interactively count characters as
users type.  Other than the byte/character confusion, there’s another
common source of errors: the fact that ‘<’ and ‘>’ are converted to,
and counted as, their respective HTML entities (&lt; and &gt;, four
bytes each)[1].  That in itself isn’t so bad, as long as it’s
deterministic and documented.  It seems the conversion may take place
a few hours after the update is sent[4], which is unfortunate but
still acceptable.  Much worse is the problem that, at least according
to the FAQ, other (unspecified) characters “may” be converted to (and
counted as) HTML entities[1].  That makes a twitter character-counting
function either a potential truncation trap (if it ignores HTML
entities) or excessively conservative (if it assumes ALL possible
characters will be HTML-entitied).  Is this still the current
behavior? If so, I’m filing a bug =)
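
A sketch of a conservative counter along those lines (assumption: only
‘<’ and ‘>’ are entity-encoded, which is exactly the uncertainty raised
above; everything is counted as UTF-8 bytes rather than characters):

    def twitter_length(text):
        # Count an update the way the byte limit appears to work: UTF-8
        # bytes, with '<' and '>' counted as their 4-byte HTML entities
        # (&lt; / &gt;).  Other characters Twitter might entity-encode
        # are NOT accounted for here.
        expanded = text.replace("<", "&lt;").replace(">", "&gt;")
        return len(expanded.encode("utf-8"))

    # "reação" is 6 characters but 8 UTF-8 bytes; "<3" counts as 5 bytes.
    assert twitter_length("reação") == 8
    assert twitter_length("<3") == 5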

* * *

I’d like to understand fully what are the motivations for these limits
and counting algorithms.  Alex Payne stated that as of now they’re
just using Ruby 1.8 String.count, which is equivalent to UTF-8 byte
count.  However, AFAIK, the 140-bytes limit was originally intended to
support sending updates as SMS messages.  Now, I have no SMS
experience at all, and it's true that SMS has a hard limit of 140
bytes, but AFAIK SMS text MUST be encoded in one of a few specific
encodings:

[…]the default GSM 7-bit alphabet [i.e. character encoding], the 8-
bit data alphabet, and the 16-bit UTF-16/UCS-2 alphabet. Depending on
which alphabet the subscriber has configured in the handset, this
leads to the maximum individual Short Message sizes of 160 7-bit
characters, 140 8-bit characters, or 70 16-bit characters (including
spaces). Support of the GSM 7-bit alphabet is mandatory for GSM
handsets and network elements, but characters in languages such as
Arabic, Chinese, Korean, Japanese or Cyrillic alphabet languages (e.g.
Russian) must be encoded using the 16-bit UCS-2 character encoding.

Notice the absence of UTF-8.

That means Twitter’s “140 bytes” does not match SMS “140 bytes” at
all.  A 140-byte UTF-8 Twitter update will take less than 140 bytes in
GSM 7-bit, so if you’re sending SMS as GSM you’re being too
pessimistic.  And the same Twitter 140-byte string can take far more
bytes in UCS-2, so if you’re sending Unicode SMS you’re being too
optimistic.  (Notice the GSM encoding supports very few characters;
it’s not possible to convert a Twitter update like “reação em japonês
é 反応” to GSM SMS, neither 7- nor 8-bit, so for those you’re stuck with
UCS-2 SMS).

Twitter doesn’t send SMS to my country so I have no way to test how
you deal with this.  I suppose you take the most space-efficient
encoding that supports all characters in the message, and if it
results in more than 140 bytes, truncate the message.  It would be
nice to have documentation on exactly what happens (if you already do,
hit me in the head for not finding it).  In any case this seems to
complicate the “we sent your friends the short version” message.
Currently, that message means “your update, as UTF-8, is in the 141–
160 bytes range”, right? But that count means nothing to SMS — a
message with 160 UTF-8 bytes might wholly fit an SMS just fine (if the
characters are all included in GSM 7-bit), while one with 71 UTF-8
bytes might not (if they’re all non-GSM, say, ‘ç’ repeated 71 times).
I think instead the SMS-conversion function should propagate all the
way back its SMS-truncation exception, and the warning should be
phrased as “your update didn’t fit an SMS, so we sent the short
version” .  Even better, only send the warning when at least one of
the subscribers is actually receiving it as SMS.
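
To make the point concrete, here is a rough sketch of the kind of check
the SMS side presumably has to do (GSM_BASIC below is only an
ASCII-level approximation of the real GSM 03.38 alphabet, which also
contains characters like é, Ñ and £; the extension characters are the
usual ones counted as two septets):

    # Approximation of the GSM 03.38 basic set, for illustration only.
    GSM_BASIC = set(
        "@\n\r 0123456789abcdefghijklmnopqrstuvwxyz"
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ!\"#$%&'()*+,-./:;<=>?_"
    )
    GSM_EXTENDED = set("^{}\\[]~|")  # each costs two septets

    def fits_one_sms(text):
        # True if `text` fits a single SMS: 160 septets in GSM 7-bit when
        # every character is representable, otherwise 70 UTF-16 code
        # units in UCS-2.
        if all(c in GSM_BASIC or c in GSM_EXTENDED for c in text):
            septets = sum(2 if c in GSM_EXTENDED else 1 for c in text)
            return septets <= 160
        return len(text.encode("utf-16-be")) // 2 <= 70

    print(fits_one_sms("x" * 160))  # True: plain GSM characters
    print(fits_one_sms("ç" * 71))   # False: forces UCS-2, over 70 code units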

* * *

This discussion brings to mind the question of why updates are limited
to 140/160 bytes in the first place.  If the intention was to support
SMS, UTF-8 byte count doesn’t have any meaningful relationship with
it.  It does work for the English world, since 160 ASCII characters
will convert to 140 GSM bytes (in 7-bit characters).  But even in this
degenerate case, the 140 part feels weird (since it’s safe to just use
160, and they won’t be truncated).  And as soon as you put a “ç” or a
“日” or a “—” in there, Twitter’s byte-count limit loses all meaning.

Twitter now has an existence that’s i

[twitter-dev] Re: Getting When A Friendship Began

2009-05-15 Thread Doug Williams
That value is the date that the account was created, as you determined. There
is no way to determine programmatically when a friendship began; that
is not something we expose in the API.
Thanks,
Doug
--

Doug Williams
Twitter Platform Support
http://twitter.com/dougw




On Fri, May 15, 2009 at 7:05 AM, Patrick Burrows wrote:

>
> Is the created_at return value of
> http://twitter.com/statuses/friends.format
> the date that person's Twitter account was created, or the date that
> friendship began?
>
> My guess is it is when the twitter account was created. If that is so, then
> is there a way to determine the date someone started following a person, or
> being followed by a person?
>
>
> --
> Patrick Burrows
> http://Categorical.ly (the Best Twitter Client Possible)
> @Categorically
>
>
>


[twitter-dev] Re: http://twitter.com/home?status=thisusedtowork

2009-05-15 Thread Steve Brunton

On Fri, May 15, 2009 at 12:07 PM, Matt Sanford  wrote:
>
> Hi there,
>
>    A bug was re-introduced with the ?status parameter. I noticed it
> yesterday on the replies link on search.twitter.com and we've got a fix
> ready to go out with our next deploy. Sorry for the inconvenience.
>

I'm going to assume this has been deployed? The CNN and iReport Tweet
This stuff is working properly. Just want to make sure I shouldn't
give the business a heads up that it might be broken if it's not.

-steve


[twitter-dev] Re: Possible Bug in Twitter Search API

2009-05-15 Thread briantroy

Matt - I'll verify that is the issue (I assume I should have new
results on page one AND page 2 - otherwise there is something else
going on).

Brian

On May 15, 8:33 am, Matt Sanford  wrote:
> Hi Brian,
>
>      My guess is that this is the same since_id/max_id pagination  
> confusion we have always had. If you look at the next_page URL in our  
> API you'll notice that it does not contain the since_id. If you are  
> searching with since_id and requesting multiple pages you need to  
> manually stop pagination once you find an id lower than your original  
> since_id. I know this is a pain but there is a large performance gain  
> in it on our back end. There was an update a few weeks ago [1] where I  
> talked about this and a warning message (twitter:warning in atom and  
> "warning" in JSON) was added to alert you to the fact it had been  
> removed. Does that sound like the cause of your issue?
>
> Thanks;
>   – Matt Sanford / @mzsanford
>       Twitter Dev
>
> [1] -http://groups.google.com/group/twitter-development-talk/browse_frm/th...
>
> On May 15, 2009, at 7:50 AM, briantroy wrote:
>
>
>
> > I've noticed this before but always tried to deal with it as a bug on
> > my side. It is, however, now clear to me that from time to time
> > Twitter Search API seems to ignore the since_id.
>
> > We track FollowFriday by polling Twitter Search every so often (the
> > process is throttled from 10 seconds to 180 seconds depending on how
> > many results we get). This works great 90% of the time. But on high
> > volume days (Fridays) I've noticed we get a lot of multi-page
> > responses causing us to make far too many requests to the Twitter API
> > (900/hour).
> > When attempting to figure out why we are making so many requests I
> > uncovered something very interesting. When we get a "tweet" we store
> > it in our database. That database has a unique index on the customer
> > id/Tweet Id. When we get mulit-page responses from Twitter and iterate
> > through each page the VAST MAJORITY of the Tweets violate this unique
> > index. What does this mean? That we already have that tweet.
> > Today, I turned on some additional debugging and saw that the tweets
> > we were getting from Twitter Search were, in fact, prior to the
> > since_id we sent.
>
> > This is causing us to POUND the API servers unnecessarily. There is,
> > however, really nothing I can do about it on my end.
>
> > Here is a snip of the log showing the failed inserts and the ID we are
> > working with. The last line shows you both the old max id and the new
> > max id (after processing the tweets). As you can see every tweet
> > violates the unique constraint (27 is the customer id). You can also
> > see that we've called the API for this one search 1016 times this
> > hour... which is WAY, WAY too much (16.9 times per second):
>
> > NOTICE: 10:45:37 AM on Fri May 15th Tweet insert failed: Duplicate
> > entry '27-1806522797' for key 2
> > SQL: insert into justsignal.tweets(text, tw_id, to_user_id, to_user,
> > from_user_id, from_user, iso_language_code, profile_image_url,
> > created_at, bulk_svc_id) values('#followfriday edubloggers
> > @CoolCatTeacher @dwarlick @ewanmcintosh @willrich45 @larryferlazzo
> > @suewaters',1806522797, 0, '', 192010, 'WeAreTeachers', 'en', 'http://
> > s3.amazonaws.com/twitter_production/profile_images/52716611/
> > Picture_2_normal.png', 'Fri, 15 May 2009 14:41:51 +', 27)
> > NOTICE: 10:45:37 AM on Fri May 15th Tweet insert failed: Duplicate
> > entry '27-1806522766' for key 2
> > SQL: insert into justsignal.tweets(text, tw_id, to_user_id, to_user,
> > from_user_id, from_user, iso_language_code, profile_image_url,
> > created_at, bulk_svc_id) values('thx for the #followfriday
> > love, @brokesocialite & @silveroaklimo.  Also thx to @diamondemory
> > & @bmichelle for the RTs of FF',1806522766, 0, '', 1149953,
> > 'lmdupont', 'en', 'http://s3.amazonaws.com/twitter_production/
> > profile_images/188591402/lisaann_normal.jpg', 'Fri, 15 May 2009
> > 14:41:51 +', 27)
> > NOTICE: 10:45:37 AM on Fri May 15th Tweet insert failed: Duplicate
> > entry '27-1806522760' for key 2
> > SQL: insert into justsignal.tweets(text, tw_id, to_user_id, to_user,
> > from_user_id, from_user, iso_language_code, profile_image_url,
> > created_at, bulk_svc_id) values('Thx! RT @dpbkmb: #followfriday
> > @ifeelgod @americandream09 @DailyHappenings @MrMilestone @emgtay
> > @Nurul54 @mexiabill @naturallyknotty',1806522760, 0, '', 1303322,
> > 'borgellaj', 'en', 'http://s3.amazonaws.com/twitter_production/
> > profile_images/58399480/img017_normal.jpg', 'Fri, 15 May 2009 14:41:51
> > +', 27)
> > NOTICE: 10:45:37 AM on Fri May 15th Tweet insert failed: Duplicate
> > entry '27-1806522759' for key 2
> > SQL: insert into justsignal.tweets(text, tw_id, to_user_id, to_user,
> > from_user_id, from_user, iso_language_code, profile_image_url,
> > created_at, bulk_svc_id) values('Morning my tweets!!! follow
> > friday! Dnt forget to RT me in need

[twitter-dev] Re: Possible Bug in Twitter Search API

2009-05-15 Thread briantroy

Matt -

That took care of it... a minor change on my side with big resource
savings. Where was the original announcement made that this had
changed? (Wondering how I missed it.)

Thanks!

Brian

On May 15, 8:33 am, Matt Sanford  wrote:
> Hi Brian,
>
>      My guess is that this is the same since_id/max_id pagination  
> confusion we have always had. If you look at the next_page URL in our  
> API you'll notice that it does not contain the since_id. If you are  
> searching with since_id and requesting multiple pages you need to  
> manually stop pagination once you find an id lower than your original  
> since_id. I know this is a pain but there is a large performance gain  
> in it on our back end. There was an update a few weeks ago [1] where I  
> talked about this and a warning message (twitter:warning in atom and  
> "warning" in JSON) was added to alert you to the fact it had been  
> removed. Does that sound like the cause of your issue?
>
> Thanks;
>   – Matt Sanford / @mzsanford
>       Twitter Dev
>
> [1] -http://groups.google.com/group/twitter-development-talk/browse_frm/th...
>
> On May 15, 2009, at 7:50 AM, briantroy wrote:
>
>
>
> > I've noticed this before but always tried to deal with it as a bug on
> > my side. It is, however, now clear to me that from time to time
> > Twitter Search API seems to ignore the since_id.
>
> > We track FollowFriday by polling Twitter Search every so often (the
> > process is throttled from 10 seconds to 180 seconds depending on how
> > many results we get). This works great 90% of the time. But on high
> > volume days (Fridays) I've noticed we get a lot of multi-page
> > responses causing us to make far too many requests to the Twitter API
> > (900/hour).
> > When attempting to figure out why we are making so many requests I
> > uncovered something very interesting. When we get a "tweet" we store
> > it in our database. That database has a unique index on the customer
> > id/Tweet Id. When we get mulit-page responses from Twitter and iterate
> > through each page the VAST MAJORITY of the Tweets violate this unique
> > index. What does this mean? That we already have that tweet.
> > Today, I turned on some additional debugging and saw that the tweets
> > we were getting from Twitter Search were, in fact, prior to the
> > since_id we sent.
>
> > This is causing us to POUND the API servers unnecessarily. There is,
> > however, really nothing I can do about it on my end.
>
> > Here is a snip of the log showing the failed inserts and the ID we are
> > working with. The last line shows you both the old max id and the new
> > max id (after processing the tweets). As you can see every tweet
> > violates the unique constraint (27 is the customer id). You can also
> > see that we've called the API for this one search 1016 times this
> > hour... which is WAY, WAY too much (16.9 times per second):
>
> > NOTICE: 10:45:37 AM on Fri May 15th Tweet insert failed: Duplicate
> > entry '27-1806522797' for key 2
> > SQL: insert into justsignal.tweets(text, tw_id, to_user_id, to_user,
> > from_user_id, from_user, iso_language_code, profile_image_url,
> > created_at, bulk_svc_id) values('#followfriday edubloggers
> > @CoolCatTeacher @dwarlick @ewanmcintosh @willrich45 @larryferlazzo
> > @suewaters',1806522797, 0, '', 192010, 'WeAreTeachers', 'en', 'http://
> > s3.amazonaws.com/twitter_production/profile_images/52716611/
> > Picture_2_normal.png', 'Fri, 15 May 2009 14:41:51 +', 27)
> > NOTICE: 10:45:37 AM on Fri May 15th Tweet insert failed: Duplicate
> > entry '27-1806522766' for key 2
> > SQL: insert into justsignal.tweets(text, tw_id, to_user_id, to_user,
> > from_user_id, from_user, iso_language_code, profile_image_url,
> > created_at, bulk_svc_id) values('thx for the #followfriday
> > love, @brokesocialite & @silveroaklimo.  Also thx to @diamondemory
> > & @bmichelle for the RTs of FF',1806522766, 0, '', 1149953,
> > 'lmdupont', 'en', 'http://s3.amazonaws.com/twitter_production/
> > profile_images/188591402/lisaann_normal.jpg', 'Fri, 15 May 2009
> > 14:41:51 +', 27)
> > NOTICE: 10:45:37 AM on Fri May 15th Tweet insert failed: Duplicate
> > entry '27-1806522760' for key 2
> > SQL: insert into justsignal.tweets(text, tw_id, to_user_id, to_user,
> > from_user_id, from_user, iso_language_code, profile_image_url,
> > created_at, bulk_svc_id) values('Thx! RT @dpbkmb: #followfriday
> > @ifeelgod @americandream09 @DailyHappenings @MrMilestone @emgtay
> > @Nurul54 @mexiabill @naturallyknotty',1806522760, 0, '', 1303322,
> > 'borgellaj', 'en', 'http://s3.amazonaws.com/twitter_production/
> > profile_images/58399480/img017_normal.jpg', 'Fri, 15 May 2009 14:41:51
> > +', 27)
> > NOTICE: 10:45:37 AM on Fri May 15th Tweet insert failed: Duplicate
> > entry '27-1806522759' for key 2
> > SQL: insert into justsignal.tweets(text, tw_id, to_user_id, to_user,
> > from_user_id, from_user, iso_language_code, profile_image_url,
> > created_at, bulk_svc_id) values('Morning my tweets!!! follo

[twitter-dev] Re: Possible Bug in Twitter Search API

2009-05-15 Thread Matt Sanford


Hi Brian,

This has always been the case; the thread I linked to earlier is
where I made it more explicit. It was always there, but it wasn't
documented properly. The documentation was updated as well to try and
help in the future.


Thanks;
 – Matt Sanford / @mzsanford
 Twitter Dev

On May 15, 2009, at 9:14 AM, briantroy wrote:



Matt -

That took care of it... minor change on my side with big resource
savings. Where was the original announcement made that this had
changed (wondering how I missed it).

Thanks!

Brian

On May 15, 8:33 am, Matt Sanford  wrote:

Hi Brian,

 My guess is that this is the same since_id/max_id pagination
confusion we have always had. If you look at the next_page URL in our
API you'll notice that it does not contain the since_id. If you are
searching with since_id and requesting multiple pages you need to
manually stop pagination once you find an id lower than your original
since_id. I know this is a pain but there is a large performance gain
in it on our back end. There was an update a few weeks ago [1]  
where I

talked about this and a warning message (twitter:warning in atom and
"warning" in JSON) was added to alert you to the fact it had been
removed. Does that sound like the cause of your issue?

Thanks;
  – Matt Sanford / @mzsanford
  Twitter Dev

[1] -http://groups.google.com/group/twitter-development-talk/browse_frm/th 
...


On May 15, 2009, at 7:50 AM, briantroy wrote:



I've noticed this before but always tried to deal with it as a bug  
on

my side. It is, however, now clear to me that from time to time
Twitter Search API seems to ignore the since_id.



We track FollowFriday by polling Twitter Search every so often (the
process is throttled from 10 seconds to 180 seconds depending on how
many results we get). This works great 90% of the time. But on high
volume days (Fridays) I've noticed we get a lot of multi-page
responses causing us to make far too many requests to the Twitter  
API

(900/hour).
When attempting to figure out why we are making so many requests I
uncovered something very interesting. When we get a "tweet" we store
it in our database. That database has a unique index on the customer
id/Tweet Id. When we get mulit-page responses from Twitter and  
iterate
through each page the VAST MAJORITY of the Tweets violate this  
unique

index. What does this mean? That we already have that tweet.
Today, I turned on some additional debugging and saw that the tweets
we were getting from Twitter Search were, in fact, prior to the
since_id we sent.



This is causing us to POUND the API servers unnecessarily. There is,
however, really nothing I can do about it on my end.


Here is a snip of the log showing the failed inserts and the ID we  
are
working with. The last line shows you both the old max id and the  
new

max id (after processing the tweets). As you can see every tweet
violates the unique constraint (27 is the customer id). You can also
see that we've called the API for this one search 1016 times this
hour... which is WAY, WAY too much (16.9 times per second):



NOTICE: 10:45:37 AM on Fri May 15th Tweet insert failed: Duplicate
entry '27-1806522797' for key 2
SQL: insert into justsignal.tweets(text, tw_id, to_user_id, to_user,
from_user_id, from_user, iso_language_code, profile_image_url,
created_at, bulk_svc_id) values('#followfriday edubloggers
@CoolCatTeacher @dwarlick @ewanmcintosh @willrich45 @larryferlazzo
@suewaters',1806522797, 0, '', 192010, 'WeAreTeachers', 'en',  
'http://

s3.amazonaws.com/twitter_production/profile_images/52716611/
Picture_2_normal.png', 'Fri, 15 May 2009 14:41:51 +', 27)
NOTICE: 10:45:37 AM on Fri May 15th Tweet insert failed: Duplicate
entry '27-1806522766' for key 2
SQL: insert into justsignal.tweets(text, tw_id, to_user_id, to_user,
from_user_id, from_user, iso_language_code, profile_image_url,
created_at, bulk_svc_id) values('thx for the #followfriday
love, @brokesocialite & @silveroaklimo.  Also thx to  
@diamondemory

& @bmichelle for the RTs of FF',1806522766, 0, '', 1149953,
'lmdupont', 'en', 'http://s3.amazonaws.com/twitter_production/
profile_images/188591402/lisaann_normal.jpg', 'Fri, 15 May 2009
14:41:51 +', 27)
NOTICE: 10:45:37 AM on Fri May 15th Tweet insert failed: Duplicate
entry '27-1806522760' for key 2
SQL: insert into justsignal.tweets(text, tw_id, to_user_id, to_user,
from_user_id, from_user, iso_language_code, profile_image_url,
created_at, bulk_svc_id) values('Thx! RT @dpbkmb:  
#followfriday

@ifeelgod @americandream09 @DailyHappenings @MrMilestone @emgtay
@Nurul54 @mexiabill @naturallyknotty',1806522760, 0, '', 1303322,
'borgellaj', 'en', 'http://s3.amazonaws.com/twitter_production/
profile_images/58399480/img017_normal.jpg', 'Fri, 15 May 2009  
14:41:51

+', 27)
NOTICE: 10:45:37 AM on Fri May 15th Tweet insert failed: Duplicate
entry '27-1806522759' for key 2
SQL: insert into justsignal.tweets(text, tw_id, to_user_id, to_user,
from_user_id, from_user, iso_lan

[twitter-dev] Re: Status ID closing in on maximum unsigned integer

2009-05-15 Thread mwm

I'm using a BIGINT(20) with MySQL for TwitteReader, so there are
18446744073709551615 values available. Still many IDs to go ;)
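
For anyone checking their own storage, a quick sketch of the thresholds
involved (plain Python; the first two are the 32-bit ceilings this
thread is about, the last two are the BIGINT ones):

    LIMITS = {
        "signed 32-bit": 2**31 - 1,            # 2147483647
        "unsigned 32-bit": 2**32 - 1,          # 4294967295
        "signed 64-bit (BIGINT)": 2**63 - 1,   # 9223372036854775807
        "unsigned 64-bit (BIGINT UNSIGNED)": 2**64 - 1,
    }

    def widths_that_fit(status_id):
        # Return the integer widths that can still hold `status_id`.
        return [name for name, limit in LIMITS.items() if status_id <= limit]

    # Status ids in this digest (around 1.8 billion) still fit everywhere,
    # but the signed 32-bit ceiling is the next one to fall.
    print(widths_that_fit(1806522797))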


[twitter-dev] Re: How to count to 140: or, characters vs. bytes vs. entities, third strike

2009-05-15 Thread leoboiko

> On May 15, 2:03 pm, leoboiko  wrote:
> while one with 71 UTF-8
> bytes might not (if they’re all non-GSM, say, ‘ç’ repeated 71 times).

Sorry, that was a bad example: 71 ‘ç’s take up 142 bytes in UTF-8, not
71.

Consider instead 71 ‘^’ (or ‘\’, ‘[’ &c.).  These take one byte in
UTF-8, but their shortest encoding in SMS is two-byte (in GSM).  So
the 71-byte UTF-8 string would take more than 140 bytes as SMS and not
fit an SMS.

Why does that matter? Consider a twitter update like this:

@d00d: in the console, type "cat ~/file.sql | tr [:upper:]
[:lower:] | less".  then you cand read the sql commands without the
annoying caps

That looks like a perfectly reasonable 140-character UTF-8 string, so
Twitter won't truncate it or warn about sending a short version.  But
its SMS encoding would take some 147 bytes, so the last words would be
truncated.
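
A quick way to reproduce that count (a sketch; treats each GSM extension
character, i.e. ^ { } \ [ ] ~ |, as two septets and everything else as
one, and assumes the spacing of the quoted update):

    GSM_EXTENDED = set("^{}\\[]~|")  # two septets each in GSM 7-bit

    def gsm_septets(text):
        # Septet count in GSM 7-bit, assuming every other character is in
        # the basic alphabet and costs a single septet.
        return sum(2 if c in GSM_EXTENDED else 1 for c in text)

    tweet = ('@d00d: in the console, type "cat ~/file.sql | tr [:upper:] '
             '[:lower:] | less".  then you cand read the sql commands '
             'without the annoying caps')
    print(len(tweet), gsm_septets(tweet))  # 140 characters, 147 septets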

--
Leonardo Boiko
http://namakajiri.net


[twitter-dev] Re: How to count to 140: or, characters vs. bytes vs. entities, third strike

2009-05-15 Thread Eric Martin

I'd be interested to see a document that details the standards for
this as well.

On May 15, 12:01 pm, leoboiko  wrote:
> > On May 15, 2:03 pm, leoboiko  wrote:
> > while one with 71 UTF-8
> > bytes might not (if they’re all non-GSM, say, ‘ç’ repeated 71 times).
>
> Sorry, that was a bad example: 71 ‘ç’s take up 142 bytes in UTF-8, not
> 71.
>
> Consider instead 71 ‘^’ (or ‘\’, ‘[’ &c.).  These take one byte in
> UTF-8, but their shortest encoding in SMS is two-byte (in GSM).  So
> the 71-byte UTF-8 string would take more than 140 bytes as SMS and not
> fit an SMS.
>
> Why that matters? Consider a twitter update like this:
>
>     @d00d: in the console, type "cat ~/file.sql | tr [:upper:]
> [:lower:] | less".  then you cand read the sql commands without the
> annoying caps
>
> That looks like a perfectly reasonable 140-character UTF-8 string, so
> Twitter won't truncate it or warn about sending a short version.  But
> its SMS encoding would take some 147 bytes, so the last words would be
> truncated.
>
> --
> Leonardo Boiko
> http://namakajiri.net


[twitter-dev] oAuth working PHP5 Minimal code

2009-05-15 Thread BenjaminHill

Did a "bottom up" build of a lightweight PHP5 oAuth class for some
read-only Twitter work.  Downloadable source if anyone is scratching
their head reading some of the more complete classes that are
available - I only included the minimum necessary to get the user
history displayed.

http://blog.benjaminhill.info/archives/67


[twitter-dev] update_profile_background_image API doesn't work

2009-05-15 Thread Voituk Vadim

Hi, I'm getting strange errors when using the update_profile_background_image
API call (see curl dump below).

The uploaded image size is 10 KB.
I've also tried the
  mqpro_glowdotsGray.br.jpg;type=image/jpeg
and
  mqpro_glowdotsGray.br.jpg;type=image/jpg
notations and got the same result.

Using the same code for the update_profile_image call works perfectly.

What am I doing wrong? Or are there any active issues related to this
API?

curl -v -F 'image=@/path/to/image/twallpapers/backgrounds/thumbs/
mqpro_glowdotsGray.br.jpg' --header 'Expect:' -u mylogin:mypassword
http://twitter.com/account/update_profile_background_image.xml


[twitter-dev] update_profile_background_image API doesn't work

2009-05-15 Thread Voituk Vadim

Hi,
I'm getting strange errors when using the update_profile_background_image
API call (see curl dump below).

The uploaded image size is 10 KB.
I've also tried the
  mqpro_glowdotsGray.br.jpg;type=image/jpeg
and
  mqpro_glowdotsGray.br.jpg;type=image/jpg
notations and got the same result.

Using the same code for the update_profile_image call works perfectly.

What am I doing wrong? Or are there any active issues related to this
API?

curl -v -F 'image=@/path/to/image/twallpapers/backgrounds/thumbs/
mqpro_glowdotsGray.br.jpg' --header 'Expect:' -u mylogin:mypassword
http://twitter.com/account/update_profile_background_image.xml
* About to connect() to twitter.com port 80
*   Trying 128.121.146.100... connected
* Connected to twitter.com (128.121.146.100) port 80
* Server auth using Basic with user 'mylogin'
> POST /account/update_profile_background_image.xml HTTP/1.1
> Authorization: Basic [hidden]
> User-Agent: curl/7.15.5 (i686-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b 
> zlib/1.2.3 libidn/0.6.5
> Host: twitter.com
> Accept: */*
> Content-Length: 10662
> Content-Type: multipart/form-data; 
> boundary=5a35eee9279a
>
< HTTP/1.1 403 Forbidden
< Date: Fri, 15 May 2009 08:32:17 GMT
< Server: hi
< Last-Modified: Fri, 15 May 2009 08:32:17 GMT
< Status: 403 Forbidden
< Pragma: no-cache
< Cache-Control: no-cache, no-store, must-revalidate, pre-check=0,
post-check=0
< Content-Type: application/xml; charset=utf-8
< Content-Length: 203
< Expires: Tue, 31 Mar 1981 05:00:00 GMT
< X-Revision: e8cdf86372da838f4cb112e826cddcd374bcee16
< X-Transaction: 1242376337-96911-6692
< Set-Cookie: lang=; path=/
< Set-Cookie:
_twitter_sess=BAh7CToJdXNlcmkE7PAFAToTcGFzc3dvcmRfdG9rZW4iLTFjZGViYWQ2Yzhm
%250ANTBhZTE5ODExNWJjMmJhNTUxNTFiZmI1NDIxNjQ6B2lkIiVhOTlhMWViNDU4%250AMTdmNzJlMjdmNzY2MTllN2JhZGRmYSIKZmxhc2hJQzonQWN0aW9uQ29udHJv
%250AbGxlcjo6Rmxhc2g6OkZsYXNoSGFzaHsABjoKQHVzZWR7AA%253D
%253D--13dd462896dfa3b13448f4535612802942248a82; domain=.twitter.com;
path=/
< Vary: Accept-Encoding
< Connection: close


<?xml version="1.0" encoding="UTF-8"?>
<hash>
  <request>/account/update_profile_background_image.xml</request>
  <error>There was a problem with your background image. Probably too big.</error>
</hash>

* Closing connection #0


[twitter-dev] Re: http://twitter.com/home?status=thisusedtowork

2009-05-15 Thread Susan at DC.org

I just tested a ?status link from a secure page, and it is indeed
working again for me.  Thanks so much!

Sue

On May 15, 1:47 pm, Steve Brunton  wrote:
> On Fri, May 15, 2009 at 12:07 PM, Matt Sanford  wrote:
>
> > Hi there,
>
> >    A bug was re-introduced with the ?status parameter. I noticed it
> > yesterday on the replies link on search.twitter.com and we've got a fix
> > ready to go out with our next deploy. Sorry for the inconvenience.
>
> I'm going to assume this has been deployed? The CNN and iReport Tweet
> This stuff is working properly. Just want to make sure I shouldn't
> give the business a heads up that it might be broke if it's not.
>
> -steve


[twitter-dev] Re: OAuth and Perl

2009-05-15 Thread ben

I'm having the same problem as Jesse using Net::OAuth.

Here's what I get back from Twitter:

$VAR1 = bless( {
 '_protocol' => 'HTTP/1.1',
 '_content' => 'Failed to validate oauth signature or
token',
 '_rc' => '401',
 '_headers' => bless( {
'connection' => 'close',
'set-cookie' =>
'_twitter_sess=BAh7BiIKZ0xhc2hJQzonQWN0aW9uQ29udHJvbGxlcjo6Rmxhc2g6OkZsYXNo
%250ASGFzaHsABjoKQHVzZWR7AA%253D
%253D--1164b91ac812d853b877e93ddb612b7471bebc74; domain=.twitter.com;
path=/',
'cache-control' => 'no-cache,
max-age=300',
'status' => '401
Unauthorized',
'date' => 'Sat, 16 May 2009
01:57:55 GMT',
'vary' => 'Accept-Encoding',
'client-ssl-cert-issuer' => '/
C=US/O=Equifax Secure Inc./CN=Equifax Secure Global eBusiness CA-1',
'client-ssl-cipher' => 'DHE-
RSA-AES256-SHA',
'client-peer' =>
'128.121.146.100:443',
'client-warning' => 'Missing
Authenticate header',
'client-date' => 'Sat, 16 May
2009 01:57:55 GMT',
'client-ssl-warning' => 'Peer
certificate not verified',
'content-type' => 'text/html;
charset=utf-8',
'server' => 'hi',
'client-response-num' => 1,
'content-length' => '43',
'client-ssl-cert-subject' => '/
C=US/O=twitter.com/OU=GT09721236/OU=See www.rapidssl.com/resources/cps
(c)08/OU=Domain Control Validated - RapidSSL(R)/CN=twitter.com',
'expires' => 'Sat, 16 May 2009
02:02:55 GMT'
  }, 'HTTP::Headers' ),
 '_msg' => 'Unauthorized',
 '_request' => bless( {
'_content' => '',
'_uri' => bless( do{\(my $o =
'https://twitter.com/statuses/update.json?
oauth_consumer_key=K9ICZr8UwHCVza91AH9Sg&oauth_nonce=2AIYDaoQyknJ5Cpq&oauth_signature=W
%2BQu6CG7ENoVNghVyNU4DX%2B2LJM%3D&oauth_signature_method=HMAC-
SHA1&oauth_timestamp=1242439075&oauth_token=15385100-
snbvmpiROaexwcJx00gkCegiBwX481bvGsVOmRo8e&oauth_version=1.0&status=Test
+message')}, 'URI::https' ),
'_headers' => bless( {
    'user-agent' => 'libwww-perl/5.808',
    'content-type' => 'application/x-www-form-urlencoded',
    'content-length' => 0
  }, 'HTTP::Headers' ),
'_method' => 'POST'
  }, 'HTTP::Request' )
   }, 'HTTP::Response' );


On Apr 30, 6:39 pm, Mario Menti  wrote:
> On Thu, Apr 30, 2009 at 11:22 PM, Jesse Stay  wrote:
> > I just wanted to bring back attention to this.  Has anyone on the list
> > gotten Twitter's OAuth to work with Perl?  Care to share some code examples?
>
> I'm using Perl's Net::OAuth heavily, but only for updating twitter status
> with existing access tokens (as my backend processing is Perl, while the
> frontend is RoR, so authorisation/key exchange is handled through rails
> OAuth).
>
> I did find one bug which I've reported back to the Net::OAuth CPAN
> maintainer, who said he'll implement in a future release:
>
> The issue relates to
> http://code.google.com/p/twitter-api/issues/detail?id=433#c32 (there's lots
> of useful info in this thread)
>
> The problem occurs when you pass an extra_param containing certain Unicode
> characters. What happens is that the parameter is passed to the signature
> creation, and the signature ends up wrong, leading to 401 errors when trying
> to make a request.
>
> The fix for this is actually detailed in the above thread, a problem with
> the regexp doing the escaping. In Perl's case, the below change
> to Net::OAuth's Message.pm fixes this:
>
>     sub encode {
>        my $str = shift;
>        $str = "" unless defined $str;
>        # return URI::Escape::uri_escape_utf8($str,'^\w.~-');
>        # MM, fix based on twitter OAuth bug report
>        return URI::Escape::uri_escape($str,'^0-9a-zA-Z\d._~-');
>     }
>
> I'm not sure if this is relevant to you given your previous messages, but
> thought I'd share just in case. With this fix implemented, it seems to work
> very well, more than 10,000 of my users have migrated to OAuth and I'm doing
> hundreds of thousands OAuth-based status update requests, without obvious
> problems.
>
> Mario.
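
The same encoding rule in Python, for reference (a sketch; RFC 3986
leaves only the unreserved characters A-Z a-z 0-9 - . _ ~ literal, which
is what the corrected Perl regexp above enforces):

    import urllib.parse

    def oauth_percent_encode(value):
        # Percent-encode `value` for OAuth signing: UTF-8 bytes, with only
        # the RFC 3986 unreserved characters left un-escaped.
        return urllib.parse.quote(str(value).encode("utf-8"), safe="-._~")

    print(oauth_percent_encode("Test message"))  # Test%20message
    print(oauth_percent_encode("reação"))        # rea%C3%A7%C3%A3o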


[twitter-dev] New API wrapper for Ruby: Chirpy

2009-05-15 Thread Andrew Smith

Hello everyone,

You might be interested to take a look at Chirpy, an API wrapper for
Ruby that I wrote.

You can find it on github here: http://github.com/ashrewdmint/chirpy/
There's also a complete class reference here: 
http://ashrewdmint.com/code/chirpy/

This was something I did for fun, and it doesn't currently support
OAuth or image uploading (so if you're looking for either of those
features, you'll want to look for another Ruby API wrapper). But, I
tried to make it super easy to use (and very enjoyable, too).

It uses RestClient (HTTP stuff) and Hpricot (XML parsing) for most of
the heavy lifting, and it's only 659 lines (comments included).

Anyway, take a look if you wish. I hope you enjoy it!

—Andrew