[NTG-context] Chemical structure formula flows out
Hello,

I've been studying PPCHTEX these days. It's very powerful, but I cannot control it well. I just wrote a chemical structure like:

  \startchemical[size=small,scale=small,width=fit,height=fit,frame=on]
    \chemical[SIX,SB13456,DB2,Z][C,C,C,C,C,C]
    \chemical[PB:Z1,ONE,Z0,SB18,Z8,MOV1,ONE,DB1,SB7,Z07,MOV1,ONE,SB13,Z03,MOV1,ONE,SB1,DB7,Z017,PE]
             [C,H,C,H,C,H,C,CH_3,O]
    \chemical[PB:Z2,ONE,Z0,SB1,Z1,PE][C,CH_3]
    \chemical[PB:Z3,ONE,Z0,SB3,Z3,PE][C,H]
    \chemical[PB:Z4,ONE,Z0,SB45,Z45,PE][C,H,H]
    \chemical[PB:Z5,ONE,Z0,SB56,Z56,PE][C,CH_3,H]
    \chemical[PB:Z6,SIX,Z3,SB23,Z24,PE][C,CH_3,CH_3]
  \stopchemical

Only the carbon ring is inside the frame; I think the PB:..,PE pairs do not change the size of the picture. I know I can specify the size by hand with something like

  \startchemical[size=small,scale=small,width=9000,height=5000,left=3000]

but I need several tries to get a correct version. Is there any parameter that does this automatically?

Thanks

--
Sincerely yours,
Chen

Zhi-chu Chen | Shanghai Synchrotron Radiation Facility
No. 2019 | Jialuo Rd. | Jiading | Shanghai | P.R. China
tel: 086 21 5955 3405 | zhichu.chen.googlepages.com | www.sinap.ac.cn
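A hedged sketch of the manual workaround mentioned in the post: explicit dimensions applied to the framed structure. The numeric values are illustrative guesses that would need tuning for this particular molecule, and the bottom key is assumed to exist alongside left (it is not taken from the original post).

  \startchemical[size=small,scale=small,width=9000,height=6000,left=3000,bottom=2000,frame=on]
    \chemical[SIX,SB13456,DB2,Z][C,C,C,C,C,C]
    % ... the remaining \chemical[PB:...,PE] calls exactly as above ...
  \stopchemical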
[NTG-context] combining multiple accents
Hello,

I always thought that "any accent can be placed on any character" in TeX. However, probably due to some boxes, this sometimes fails to work. So how can I create "a with cedilla and ring above", for example \r{\c a}? (Yes, I know that I can switch the order: \c{\r a} works as expected, but I need to create more complicated cases in general and I don't want to depend on whether the accent will be placed properly or not.) Is there any simple cure for that?

Thanks a lot,
Mojca
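A minimal test file for the two orderings discussed above (just a sketch for experimenting; it assumes the cedilla and ring accent commands \c and \r are available in the setup, as they evidently are in the original post):

  \starttext
    \c{\r a} % ring applied first, then cedilla: reported to work
    \r{\c a} % cedilla applied first, then ring: the problematic order
  \stoptext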
Re: [NTG-context] Header number separator
On Thu, 9 Nov 2006, Jeff Smith wrote:
> On 11/7/06, Aditya Mahajan <[EMAIL PROTECTED]> wrote:
>> I need this functionality for a project (IEEE conference style), so
>> here is a hack to get the feature. The referencing also works.
>>
>> Use with caution, can break existing macros.
>
> Wow, thanks a lot! This works as expected. In what situation can it
> break existing macros? I intend to use that extensively but in a
> fairly simple document (a thesis... yeah, another one in ConTeXt!). Is
> there anything I should _not_ do?

In principle, it should work fine for European languages. A lot of the trickery with numbers and number formats is there because ConTeXt also supports other languages, like Chinese and Arabic. @@longsectionnumber is used a lot by the sectioning macros, and I do not completely understand what is happening there. My solution was based on trial and error and figuring out what works.

Moreover, it changes a core feature of ConTeXt: I am associating separators with sectioning levels rather than with heads. Right now, in principle, you can have different separators for different heads at the same level. For example:

  \setuphead[remark][section=section-4,separator=.]
  \setuphead[note]  [section=section-4,separator=-]

With this change, this will no longer work. So the macro is not backward compatible, and thus can break existing code. If you have only one head at each sectioning level, and do not plan to use Chinese or Arabic, it should work fine. At least for my simple, 5-page document, it works correctly :-)

Aditya
Re: [NTG-context] postponed text and headers
Any ideas on this? I have tried everything I can think of to turn off headers on a postponed page, including using \page[blank], etc., but nothing seems to work quite right. Attached is a sample showing the issue. This is one of the last issues I have (I think).

Thanks,
paul

On 11/7/06, Paul Jones <[EMAIL PROTECTED]> wrote:

Hello everyone,

I have a TeX document where I postpone (\startpostponing) certain pages until later in the document. On those pages I would like the headers/footers turned off. I have tried \noheaderandfooterlines, and setting the header to high, but neither seems to work. Have I done something wrong, or is there another way to do this? Here is an example of what I tried to describe:

  %output=pdf
  \setuppagenumbering[conversion=numbers, location=]
  \setupheadertexts[pagenumber][][][pagenumber]
  \starttext
  \startpostponing[+2]
  \page
  \noheaderandfooterlines
  % this will generally be a fullbleed picture or set of pictures on a page by themselves
  \framed[frame=on, width=\textwidth, height=\textheight]
    {\dorecurse{1}{\input zapf}}
  \page
  \stoppostponing
  \dorecurse{20}{\input davis}
  \stoptext

Thanks in advance for any help,
paul
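One variant worth experimenting with (only a sketch; whether these settings survive the postponing mechanism is exactly the open question of this thread) is to switch the header and footer state to empty on the postponed page itself:

  \startpostponing[+2]
  \page
  \setupheader[state=empty] % 'empty' normally blanks the header on the current page only
  \setupfooter[state=empty]
  \framed[frame=on, width=\textwidth, height=\textheight]
    {\dorecurse{1}{\input zapf}}
  \page
  \stoppostponing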
Re: [NTG-context] unic-xxx.tex glyph lists: minor bugs, questions
> > > The best way out would be if I could enable ConTeXt's UTF-8 regime while
> > > running XeTeX in \XeTeXinputencoding=bytes mode, but I haven't gotten
> > > that to work yet.

That would mean that you lose the whole range of glyphs & scripts outside of the scope which ConTeXt supports (you would land almost at the level of pdfTeX again). For most European users that might still be something reasonable, but I wouldn't go that way.

> > maybe mojca has

(A little correction to what I wrote in my previous mail.) If you were really looking for that part of the code: simply replace "\expandafter \endinput" inside the XETEX block in regi-utf.tex with "\XeTeXinputencoding=bytes". Then \enableregime[utf-8] will mean that ConTeXt takes control over UTF instead of XeTeX. From what I understood on the wiki, it probably used to be that way at the beginning, but then Hans changed his mind and decided to ignore \enableregime[utf] completely when processing with XeTeX.

Mojca
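To make the described edit concrete, here is a sketch of the fragment in question (the original \beginXETEX ... \endXETEX block is quoted later in this thread; the exact invocation syntax of the \XeTeXinputencoding primitive should be double-checked against the XeTeX documentation):

  % regi-utf.tex as distributed: bail out under XeTeX, leaving utf handling to the engine
  \beginXETEX
  \expandafter \endinput
  \endXETEX

  % suggested change: read raw bytes so that ConTeXt's own utf regime stays in charge
  \beginXETEX
  \XeTeXinputencoding=bytes
  \endXETEX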
Re: [NTG-context] Header number separator
On 11/7/06, Aditya Mahajan <[EMAIL PROTECTED]> wrote:
> I need this functionality for a project (IEEE conference style), so
> here is a hack to get the feature. The referencing also works.
>
> Use with caution, can break existing macros.

Wow, thanks a lot! This works as expected. In what situation can it break existing macros? I intend to use that extensively but in a fairly simple document (a thesis... yeah, another one in ConTeXt!). Is there anything I should _not_ do?

Thanks again for your help! It's always greatly appreciated.

Jeff
Re: [NTG-context] unic-xxx.tex glyph lists: minor bugs, questions
On 11/5/06, Hans Hagen wrote:
> Philipp Reichmuth wrote:
> > I've been writing a script that sifts through the unic-xxx.tex files to
> > get a readable mapping of which Unicode characters are supported using
> > \Amacron-style names.
>
> mtxtools can create such lists using the unicode consortium glyph table,
> mojca's mapping list and enco/regi files
>
> we use mtxtools to create the tables needed for xetex (used for case
> mapping) and luatex (more extensive manipulations)

I have mtxtools.bat, but no mtxtools.rb here.

> > Are the unic-xxx files automatically generated or maintained by hand?
>
> maintained by hand, again, just send me the fixed file, but we need to
> make sure that the fix is ok (i.e. works as expected)

Although there should be no reason for not generating them automatically. I did that for the regime files (I only wrote a script and executed it, and Hans included the files, so it's only semi-automatic; it would be polite of me to incorporate that into the existing [whateverthename]tools.rb).

> > Incidentally, it would be trivial now to put the list of ConTeXt glyphs
> > on the Wiki, if anyone's interested.
>
> there is a file contextnames.txt in the distributions (maintained by
> mojca), while the not yet distributed char-def.lua has the info for luatex

If you find errors there, please let me know. (The missing letter in Cyrillic was due to a missing position in Unicode.)

> > I wanted to use this to work towards better support for the whole range
> > of ConTeXt glyphs with OpenType fonts under XeTeX, by reading which
> > ConTeXt glyphs are available in a font and building a list of
> > "\catcode`ā=\active \def ā {\amacron}"-style definitions for the rest.
> > (Unfortunately this kind of list would be font-specific, but the generic
> > alternative would be a huge list of active characters with an
> > \ifnum\XeTeXcharglyph">0 macro behind it, and that would probably be
> > quite slow.) I wonder if there is a more intelligent way to achieve
> > this goal; since part of the logic for mapping code points onto glyph
> > macros exists already, it would be easier if there was a way to reuse that.
>
> best take a look at mtxtools; if needed we can generate the definitions;
> concerning speed, it will not be that slow, because tex is quite fast
> on such tests (unless XeTeXcharglyph is slow due to lib access); the
> biggest thing is to make sure that things don't expand in unwanted ways.
>
> (i must find time to update my xetex bin; i must admit that i never
> tried to use open type fonts in xetex (the mac is broken))

But OpenType fonts also work on Linux & Windows.

> > The best way out would be if I could enable ConTeXt's UTF-8 regime while
> > running XeTeX in \XeTeXinputencoding=bytes mode, but I haven't gotten
> > that to work yet.
>
> maybe mojca has

You could theoretically comment out

  \beginXETEX
  \expandafter \endinput
  \endXETEX

in regi-utf.tex, but that's not the best idea.

Mojca
Re: [NTG-context] Unicode stuff (was: Re: Specifying BibTeX engine)
On 11/4/06, Philipp Reichmuth wrote:
> I've been starting to reuse some of this work in a script to do active
> character assignment for XeTeX depending on what glyphs are present in
> an OpenType font, so that those characters for which the font doesn't
> have a glyph are generated by ConTeXt. Basically I want to produce
> something like this:
>
>   \ifnum\XeTeXcharglyph"010D=0
>     \catcode`č=\active \def č{\ccaron}
>   \else
>     \catcode`č=\letter
>   \fi % ConTeXt knows this letter -> better hyphenation
>
>   \ifnum\XeTeXcharglyph"1E0D=0
>     \catcode`ḍ=\active \def ḍ{\b{d}}
>   \else
>     \catcode`ḍ=\letter
>   \fi % ConTeXt doesn't know this letter

No reason for not adding it.

> (with \other, respectively, for non-letters). Being somewhat of a
> novice to TeX programming, I'm not sure if this will work, though, and
> I'm also not sure whether it's better to generate static scripts that do
> this for every font (so the resulting TeX file is a font-specific big
> list of \catcode`$CHARACTERs) or to do this dynamically on every font
> change, maybe limited to selectable Unicode ranges (which is more
> general but also a lot slower).

Generating this for every single font would be stupid. This should be part of low-level XeTeX (Jonathan has promised to look into it some time). In my opinion the best way to deal with it would be the ability to define a fallback definition for "every" missing letter in a font. Consequently, if you have "ddotbelow" missing in your font, XeTeX would ask ConTeXt whether some fallback definition has been provided for that glyph. If yes, it would fall back to it, "\b{d}", but if the glyph were present in that font, XeTeX would use it.

> > I'd prefer to see a context encoding added to GNU recode for the
> > benefit of future archeologists trying to decipher ancient documents.
>
> That would be better I guess, but isn't ConTeXt encoding a moving target
> in that characters can still get added? Or is the list fixed to AGL
> glyph names and nothing else?

No, it's certainly not fixed to AGL. But I wouldn't object to adding it to GNU recode (on top of "(La)TeX", which also recognizes \v, \b, ...) if someone decided to make a good revision of it and if more people thought that it would be useful (and if the developers are open to that idea). I try to use Unicode when writing sources whenever possible.

Mojca

PS for Philipp: I didn't try out your definitions, but here is a cut-out of an older conversation as an example of what certainly doesn't work under XeTeX ;) (the answer was written by Jonathan Kew). I was trying to write a few macros to support the old tfm-based fonts, but figured out that that was the wrong starting point (and also for other reasons than yours).

> \catcode`ð=\active \defð{^^f0}
> \starttext
> Testing ... ð
> \stoptext
>
> and it seems to enter some infinite loop when ð is encountered (I can
> define any other letter as well, but only ^^f0 is causing problems).

No, this seems to me like it's the wrong way to define the character! And I think you would have the same problem with other letters if trying to define them as their own codes; the ones that work for you must be getting defined as *different* codes from the original input.

The ^^xx notation is converted to a literal character by TeX's input scanning routine, so it behaves exactly as if it were that character itself. And ^^f0 in Latin-1 (or Unicode) is the ð character. So this definition works exactly the same as if you were to say

  \catcode`ð=\active \defð{ð}

which is clearly recursive.
Given that you don't need to remap ð in the input to some other Unicode character for printing, there should be no need for this at all. The only reason to use a definition like this would be if the input text used a *different* character where you want to print eth, or you want to print something *other* than character F0 for the input ð.

In general, a "safe" form of the definition would be to use \chardef:

  \catcode`ð=\active \chardefð="F0

This makes ð into a macro that expands to the character "F0; there is an important difference between this and ^^f0, which actually "becomes" the character ð itself as the input is read (and therefore inherits its catcode, definition, etc.).
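Putting this "safe" form into the small test file quoted earlier in the thread gives a sketch like the following (assuming a Unicode-capable engine such as XeTeX, and a current font that actually provides a glyph for U+00F0):

  \catcode`ð=\active \chardefð="F0

  \starttext
  Testing ... ð
  \stoptext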