On 07.06.2012 10:57, Sam Hu wrote:
Greetings!

The documentation on this website provides an example of how to get webpage
content with std.net.curl. It is quite straightforward:

[code]
import std.net.curl, std.stdio;

void main(){

// Return a string containing the content specified by a URL
string content = get("dlang.org");

It's simply that on this line you "convert" whatever the site content was to Unicode. The problem is that this "convert" is either broken or is just a cast, whereas it should re-encode the source as Unicode. So the way around it is to fetch the content as an array of bytes and decode it yourself (a rough sketch follows the code block).


writefln("%s\n",content);

readln;
}
[/code]
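
As a rough sketch of that workaround: std.net.curl's get() can be asked for ubyte[] instead of char[], which skips the UTF validation entirely and leaves decoding up to you. Note that Phobos has no GBK decoder, so decodeGBK below is only a placeholder for whatever converter you end up using (iconv bindings, a third-party library, etc.).

[code]
import std.net.curl, std.stdio;

void main()
{
    // Request ubyte[] instead of char[]; no UTF validation is performed,
    // so a GBK (or any other encoding) page comes through untouched.
    ubyte[] raw = get!(AutoProtocol, ubyte)("yahoo.com.cn");

    // Decode the bytes yourself here. decodeGBK is a hypothetical helper,
    // standing in for an external GBK-to-UTF-8 converter.
    // string text = decodeGBK(raw);

    writefln("fetched %s bytes", raw.length);
}
[/code]

The point is that with the ubyte variant no transcoding is attempted, so the "bad gbk encoding" error never triggers; the decoding step is then entirely under your control.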

When I change get("dlang.org") to get("yahoo.com"), everything goes
fine; but when I change it to get("yahoo.com.cn"), a runtime error says
something like "bad gbk encoding"...

So my very simple question is: how do I retrieve information from a webpage
that could possibly contain an Asian font (like a Chinese font)?

I think it's not a "font" problem but an encoding problem.

Thanks for your help in advance.

Regards,
Sam


--
Dmitry Olshansky
