On Thu, Nov 11, 2021 at 05:31:15PM +0100, Страхиња Радић wrote:
> (for example, if a simple "cat /some/file" for a multi-line text file
> has a delay anywhere from 500 ms to a second or two between the output
> of individual lines, when not dependent on factors such as reading
> from a network or a
On 21/11/10 08:55, NRK wrote:
> I wouldn't say it's "critical need". And if we judge from that pov then
> one could ask, "What's the critical need for a dynamic window manager or
> minimal software in general?".
A terminal emulator's job is to allow terminal input/output. Latency is simply not
relevant
On Tue, Nov 09, 2021 at 02:00:57PM +0100, Laslo Hunhold wrote:
> I'm always wondering: What do you suggest to improve the
> latency-situation?
If I knew the answer to that, then I would've ditched XTerm and patched
ST already. Unfortunately, I know next to nothing when it comes to the
inner workings
On 21/11/09 02:00, Laslo Hunhold wrote:
> I'm always wondering: What do you suggest to improve the
> latency-situation? Can we even be "better" than the screen's framerate?
I'm wondering: what's the use case for such a critical need for low latency?
Playing DOOM (2016) in a terminal with aalib? That'
On 21/10/29 12:18, Dmytro Kolomoiets wrote:
> Страхиња Радић, do you have a cleaned up version of the patch
> which applies to latest st tree without rejecting hunks?
No, but it shouldn't be too hard to make given the PR. I have applied it to my
fork of st (https://git.sr.ht/~strahinja/st).
> https://github.com/LukeSmithxyz/st/pull/224
Страхиња Радић, do you have a cleaned up version of the patch
which applies to latest st tree without rejecting hunks?
On Wed, 27 Oct 2021 at 23:12, NRK wrote:
>
> On Wed, Oct 27, 2021 at 09:38:41AM +0200, Hiltjo Posthuma wrote:
> > It's a longstanding myth st has input latency issues.
On Wed, Oct 27, 2021 at 09:38:41AM +0200, Hiltjo Posthuma wrote:
> It's a longstanding myth st has input latency issues.
> The common quoted benchmark is wrong.
If we're thinking about the same benchmark then it's also outdated.
But regardless, I didn't base my decision on that. Some time ago (9-10
The benchmark was done on macOS, if I'm not mistaken.
On Wed, Oct 27, 2021 at 03:52:09AM +0600, NRK wrote:
> On Tue, Oct 26, 2021 at 07:51:52PM +, Ian Liu Rodrigues wrote:
> > I've noticed that in some situations wide characters are being cropped
> > on my terminal. The following script, which uses a wide character from
> > the "Nerd Font Symbol"[1], shows a test case:
On Tuesday, October 26th, 2021 at 17:27, Страхиња Радић
wrote:
> For me, this patch fixed the glyph truncation:
>
> https://github.com/LukeSmithxyz/st/pull/224
>
> Perhaps someone could add this to the official patches?
Thanks! I will try applying that patch.
On Tue, Oct 26, 2021 at 07:51:52PM +, Ian Liu Rodrigues wrote:
> I've noticed that in some situations wide characters are being cropped
> on my terminal. The following script, which uses a wide character from
> the "Nerd Font Symbol"[1], shows a test case:
>
>
> echo -e '\e[31m \e[0m c'
Dear all,
This is my first post here after two failed attempts, I think because
of the email being sent as HTML. Let's hope this one goes alright.
I've noticed that in some situations wide characters are being cropped
on my terminal. The following script, which uses a wide character from
the "Nerd Font Symbol"[1], shows a test case:
On 21/10/26 07:51, Ian Liu Rodrigues wrote:
> echo -e '\e[31m \e[0m c'
> echo -e '\e[31m \e[0mc'
>
>
> Here is a screenshot of the script's output: https://qu.ax/3SBs.png
For me, this patch fixed the glyph truncation:
https://github.com/LukeSmithxyz/st/pull/224
Perhaps someone could add this to the official patches?
random...@fastmail.us dixit:
>Those systems aren't using wchar_t *or* wint_t for unicode, though.
Do not assume that.
tg@blau:~ $ echo '__STDC_ISO_10646__ / __WCHAR_TYPE__ , __WCHAR_MAX__' | cc -E -
# 1 "<stdin>"
# 1 "<built-in>"
# 1 "<command-line>"
# 1 "<stdin>"
29L / short unsigned int , 65535U
>The main reason for wint_t's
On Mon, Apr 15, 2013, at 15:36, Thorsten Glaser wrote:
> Actually, wint_t is the standard type to use for this. One
> could also use wchar_t but that may be an unsigned short on
> some systems, or a signed or unsigned int.
Those systems aren't using wchar_t *or* wint_t for unicode, though.
The main reason for wint_t's
Strake dixit:
>In UTF-8 the maximum encoded character length is 6 bytes [1]
Right, but the largest codepoint in Unicode is U-0010FFFF,
which is F4 8F BF BF in UTF-8.
Most things are in the BMP anyway – for example, the distance
between the lowest and highest encoded glyph in an X11 font
is ro
On Mon, Apr 15, 2013, at 15:16, Strake wrote:
> On 15/04/2013, random...@fastmail.us wrote:
> > On Mon, Apr 15, 2013, at 10:58, Martti Kühne wrote:
> >> According to a quick google those chars can become as wide as 6
> >> bytes,
> >
> > No, they can't. I have no idea what your source on this is.
>
2013/4/15 Strake :
> On 15/04/2013, random...@fastmail.us wrote:
>> On Mon, Apr 15, 2013, at 10:58, Martti Kühne wrote:
>>> According to a quick google those chars can become as wide as 6
>>> bytes,
>>
>> No, they can't. I have no idea what your source on this is.
>
> In UTF-8 the maximum encoded character length is 6 bytes [1]
On 15/04/2013, random...@fastmail.us wrote:
> On Mon, Apr 15, 2013, at 10:58, Martti Kühne wrote:
>> According to a quick google those chars can become as wide as 6
>> bytes,
>
> No, they can't. I have no idea what your source on this is.
In UTF-8 the maximum encoded character length is 6 bytes [1]
On Mon, Apr 15, 2013, at 10:58, Martti Kühne wrote:
> On Sun, Apr 14, 2013 at 2:56 AM, Random832 wrote:
> > Okay, but why not work with a unicode code point as an int?
>
> -1 from me.
> It is utter madness to waste 32 (64 on x86_64) bits for a single
> glyph.
A. current usage is char[4]
B. int
2013/4/15 Martti Kühne :
> -1 from me.
> It is utter madness to waste 32 (64 on x86_64) bits for a single
> glyph. According to a quick google those chars can become as wide as 6
> bytes, and believe me you don't want that, as long as there are
> mblen(3) / mbrlen(3)...
int is always 32 bits, and g
On Sun, Apr 14, 2013 at 2:56 AM, Random832 wrote:
> Okay, but why not work with a unicode code point as an int?
>
-1 from me.
It is utter madness to waste 32 (64 on x86_64) bits for a single
glyph. According to a quick google those chars can become as wide as 6
bytes, and believe me you don't want that, as long as there are
mblen(3) / mbrlen(3)...
On 04/14/2013 02:10 AM, Christoph Lohmann wrote:
Greetings.
On Sun, 14 Apr 2013 08:10:22 +0200 Random832 wrote:
I am forced to ask, though, why character cell values are stored in
utf-8 rather than as wchar_t (or as an explicitly unicode int) in the
first place, particularly since the simplest way to detect a wide
character is to call
Random832 writes:
> On 04/13/2013 07:07 PM, Aurélien Aptel wrote:
>> The ISO/IEC 10646:2003 Unicode standard 4.0 says that:
>>
>> "The width of wchar_t is compiler-specific and can be as small as
>> 8 bits. Consequently, programs that need to be portable across any C
> or C++ compiler should not use wchar_t for storing Unicode text.
Greetings.
On Sun, 14 Apr 2013 08:10:22 +0200 Random832 wrote:
> I am forced to ask, though, why character cell values are stored in
> utf-8 rather than as wchar_t (or as an explicitly unicode int) in the
> first place, particularly since the simplest way to detect a wide
> character is to call
On 04/13/2013 07:07 PM, Aurélien Aptel wrote:
The ISO/IEC 10646:2003 Unicode standard 4.0 says that:
"The width of wchar_t is compiler-specific and can be as small as
8 bits. Consequently, programs that need to be portable across any C
or C++ compiler should not use wchar_t for storing Unicode text.
On Sat, Apr 13, 2013 at 11:17 PM, Random832 wrote:
> I am forced to ask, though, why character cell values are stored in utf-8
> rather than as wchar_t (or as an explicitly unicode int) in the first place,
> particularly since the simplest way to detect a wide character is to call
> the function w
I don't mean as in wchar_t, I mean as in characters (generally in East
Asian languages) that are meant to take up two character cells.
I am forced to ask, though, why character cell values are stored in
utf-8 rather than as wchar_t (or as an explicitly unicode int) in the
first place, particul