========================================================================
Date: Mon, 1 Nov 93 10:20:51 GMT
Reply-To: "NTS-L Distribution list"
From: Sebastian Rahtz
Subject: Re: what to implement in e-TeX
In-Reply-To: <"leeman.yor.213:28.09.93.13.37.43"@york.ac.uk>

support for colour attributes. as i understand it, when TeX stashes away some text for some reason, it includes with it the font information; i'd like it to also record the current colour setting. one approach would be a new primitive \newcolor, like \language, limiting us to 255 colours. so i'd say \newcolor\green and somehow associate that with some colour changing command. then \TeX could associate all saved material with the current colour. i am not explaining this very clearly, i expect. but is there any support for the concept?

Sebastian Rahtz

========================================================================
Date: Mon, 1 Nov 93 22:18:20 +1100
Reply-To: "NTS-L Distribution list"
From: ecsgrt@LUXOR.LATROBE.EDU.AU
Subject: Re: what to implement in e-TeX

I'd like Sebastian to further explain his purpose for the color attribute.

Also, now that the ice has broken, I have some wishes. An obvious class of extensions to TeX is to exterminate all the hard-wired limitations. And since we're going incompatible, why stick with the existing TFM design? (That means modifying Metafont, but so?)

1. Make memory allocation dynamic. (Colors can occur in any number, then.)
2. Allow unlimited numbers of characters in fonts. (Minimal case: allow 64 K characters per font, for Unicode.)
3. Unlimited numbers of text and math fonts and font families.
4. Garbage collection - or whatever is best - so that the \new... macros can have \dispose... or \free... partners.
5. Boolean variables.
6. Robust \if \else \fi statements.
7. Loop primitives.
8. String storage and manipulation.
9. Decent file I/O. (Add this to MF, too, _please_!)
10. Arbitrary precision floating point.
11. A calculation mode, wherein `a * b' can be used for multiplication, for example.
12. Numeric variables, string variables, records, arrays, user-definable data types.
13. Access to catcode _after_ it's been read.
14. Font encoding, decoding, recoding. Allow characters to be referred to by name, rather than number. (Might as well make TFM files more useful.)
15. Override the `^^f' output notation, and such weird conversions that hinder decent file output.
16. TFM files must contain a TFM variety identification code, like DVI files.
17. Obviously, allow any number of different character heights, depths, and italic corrections.
18. Eliminate the `math axis' restrictions.
19. Tests for current status of nonstopmode, batchmode, etc.
20. Characters and boxes to be treated more symmetrically: for example, characters to be syntactically bona-fide boxes.
21. Accented and other composite characters to be hyphenatable, and otherwise treatable as single objects (compare previous wish).
22. Unlimited numbers of fontdimens.
23. Add to this list, please!

If and when I ever understand extensible characters, delimiters, and such, I'm sure I'll want changes there, too, so that the mysterious use of numbers is replaced by a user-comprehensible description.
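Sebastian's \newcolor, mentioned above, already has a purely macro-level half; only attaching the colour attribute to saved boxes needs engine support. A minimal sketch of what the allocation side might look like, written in the style of plain TeX's \newlanguage; nothing below is in TeX 3.x or in any of the postings, and \colorattribute in particular is a purely hypothetical primitive:

    % Sketch only: the allocation half of the proposal.  The engine half --
    % actually recording the attribute inside saved boxes -- is represented
    % by the hypothetical primitive \colorattribute, which does not exist.
    \newcount\colorcount \colorcount=0      % colours allocated so far
    \def\newcolor#1{%
      \global\advance\colorcount by 1
      \ifnum\colorcount>255 \errmessage{No room for a new color}\fi
      \global\chardef#1=\colorcount}        % #1 becomes a small constant

    \newcolor\green
    % \colorattribute=\green   % <-- the part e-TeX would have to supply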
Geoffrey Tobin

========================================================================
Date: Mon, 1 Nov 93 12:52:45 CET
Reply-To: "NTS-L Distribution list"
From: Joachim Schrod
Subject: Re: what to implement in e-TeX
In-Reply-To: <199311011121.AA22325@rs3.hrz.th-darmstadt.de> from "ecsgrt@LUXOR.LATROBE.EDU.AU" at Nov 1, 93 10:18:20 pm

You wrote:
>
> Also, now that the ice has broken, I have some wishes.

Sorry to say, but IMO your wishes have some fundamental flaws. I think they imply that TeX is a statement oriented, imperative language. It isn't. In addition, you're mixing the lexical and the syntactical analysis of TeX.

The TeX language does *ONLY* handle TOKEN LISTS -- nothing else. You have neither numbers nor strings, etc. In particular, tokens don't have category codes!

Any change in this would mean not to enhance TeX, but to write it anew.

> 4. Garbage collection - or whatever is best - so that the \new...
> macros can have \dispose... or \free... partners.

What is `garbage collection' in this circumstance? \new... isn't a TeX primitive. You can write a \dispose... or \free... macro easily.

> 5. Boolean variables.

What difference to \newif ?

> 6. Robust \if \else \fi statements.

TeX doesn't have statements. It's a macro language.

> 8. String storage and manipulation.

TeX doesn't have strings. Only token lists.

> 11. A calculation mode, wherein `a * b' can be used for
> multiplication, for example.

TeX doesn't know expressions, only assignments. (Besides, the syntactical sugar you want has been implemented already.)

> 12. Numeric variables, string variables, records, arrays,
> user-definable data types.

TeX has no numbers. (And no strings, as said above.)

> 13. Access to catcode _after_ it's been read.

After anything is read, it's a token. A category code is not an attribute of a token. (You might want to have access to the token category, but that's something different.)

> 23. Add to this list, please!

Before destroying a usable system, understand it first. After all, from a CS viewpoint TeX is a trivial language. One can't introduce things which go against the paradigm of a language. If one wants them (and I fully agree with you that this is the case), one has to _design_ a complete new language.

-- Joachim

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Joachim Schrod                    Email: schrod@iti.informatik.th-darmstadt.de
Computer Science Department       Technical University of Darmstadt, Germany

========================================================================
Date: Mon, 1 Nov 93 13:56:52 LCL
Reply-To: Mike Piff
From: Mike Piff
Subject: Re: what to implement in e-TeX

%>Date: Mon, 1 Nov 93 12:52:45 CET
%>Reply-to: NTS-L Distribution list
%>From: Joachim Schrod
%>Subject: Re: what to implement in e-TeX
%>To: Multiple Recipients of
%>
%>Joachim Schrod wrote, in a damning indictment of both Geoffrey Tobin and
%>\TeX\ the program:
%>>
%>> Also, now that the ice has broken, I have some wishes.
%>
%>Sorry to say, but IMO your wishes have some fundamental flaws. I think
%>they imply that TeX is a statement oriented, imperative language. It
%>isn't. In addition, you're mixing the lexical and the syntactical
%>analysis of TeX.

Lexical analysis is just one aspect of syntactical analysis. Do you mean syntactical and semantical?

Whilst we are on this subject, does there exist anywhere a complete syntax of \TeX? I couldn't convince myself that it does all exist in the \TeX book, although large chunks of it do exist there.
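As an aside on Joachim's claim above that a \dispose.../\free... macro is easy to write (Mike questions that word just below): here is one minimal sketch of the free-list scheme such a macro would rest on. None of this is from the postings; all names (\countfreelist, \disposecount, \reclaimcount, \popcount) are invented for illustration, error handling is omitted, and \toks0 is quietly clobbered.

    % Recycle \count registers: \disposecount pushes a retired register onto
    % a free list, \reclaimcount reuses one if available, otherwise it falls
    % back to the ordinary plain TeX \newcount.
    \def\countfreelist{}                  % freed registers, newest first

    \def\disposecount#1{%
      \toks0=\expandafter{\countfreelist}%
      \edef\countfreelist{\noexpand#1\the\toks0}}

    \def\reclaimcount#1{%
      \ifx\countfreelist\empty
        \csname newcount\endcsname#1%     % \newcount is \outer, hence \csname
      \else
        \expandafter\popcount\countfreelist\endpop#1%
      \fi}
    \def\popcount#1#2\endpop#3{\let#3=#1\def\countfreelist{#2}}

    \newcount\scratch                     % allocated the normal way
    \disposecount\scratch                 % later: give the register back
    \reclaimcount\mycounter               % \mycounter now reuses that register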
%> %> The TeX language does *ONLY* handle TOKEN LISTS -- nothing else. %> You have neither numbers nor strings, etc. In particular, tokens %> don't have category codes! %> Count registers are numbers, surely! And, in a round-about way, we are allowed to add, subtract, multiply and divide values. %>Any change in this would mean not to enhance TeX, but to write it anew. %> Is that clear to everyone else other than myself and Geoffrey? %>> 4. Garbage collection - or whatever is best - so that the \new... %>> macros can have \dispose... or \free... partners. %> %>What is `garbage collection' in this circumstance? \new... isn't a TeX %>primitive. You can write a \dispose... or \free... macro easily. %> I would question that word ``easily''. \newcount\a \newcount\b \dispose\a \newcount\c \TeX\ has some features in common with the original BASIC language as far as variables go---26 of them, and their names are a--z. (Substitute 256 of them, and their names are 0--255.) %>> 5. Boolean variables. %> %>What difference to \newif ? %> \newif\ifthis \this=(\a<\b)\relax ??????????????????????? %>> 6. Robust \if \else \fi statements. %> %>TeX doesn't have statements. It's a macro language. %> Well, ``statement'' doesn't appear in DEK's index, but he does define lots of assignments on p275 of my book, which anyone else could call assignment statements rather than assignment commands. %>> 8. String storage and manipulation. %> %>TeX doesn't have strings. Only token lists. %> Same objection as to \newcount %>> 11. A calculation mode, wherein `a * b' can be used for %>> multiplication, for example. %> %>TeX doesn't know expressions, only assignments. (Besides, the %>syntactical sugar you want has been implemented already.) %> %>> 12. Numeric variables, string variables, records, arrays, %>> user-definable data types. %> %>TeX has no numbers. (And no strings, as said above.) %> \newcount business again %>Before destroying a usable system, understand it first. After all, %>from a CS viewpoint TeX is a trivial language. What is a ``trivial'' language? %> One can't introduce things which go against the paradigm of a %>language. If one wants them (and I fully agree with you that this is %>the case), one has to _design_ a complete new language. %> So is this what we are supposed to be doing, designing a new language? That was not the message I received a few days ago, about extensions that could be called optionally from within e-tex, but otherwise would leave \TeX\ intact% an ISO-\TeX\ with optional extensions available from the command line. Mike Piff %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% Dr M J Piff %% e-mail: %% Department of Pure Mathematics %% %% University of Sheffield %% M.Piff@sheffield.ac.uk %% Hicks Building %% PM1MJP@derwent.shef.ac.uk %% Hounsfield Road %% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% SHEFFIELD S3 7RH %% Telephone: (0742) 824431 %% England %% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ======================================================================== Date: Mon, 1 Nov 93 15:18:44 CET Reply-To: "NTS-L Distribution list" From: bbeeton Subject: Re: what to implement in e-TeX In-Reply-To: <01H4SHWS9FW2O2IZOL@MATH.AMS.ORG> geoffrey tobin, in his wish list, includes 18. Eliminate the `math axis' restrictions. could he explain that, please? 
i've seen math set without paying attention to a single point of orientation, and signs of operation float up and down, willy-nilly, not in relation to fraction lines, etc., etc. it looks *dreadful*, and i certainly wouldn't be proud to publish it. if geoffrey means remove the ability to maintain that sort of order, then i would hope that the axis feature isn't entirely removed, but perhaps is made optional for those who don't need it. if he means something else, i'd like to understand what that is. my own most-wanted feature is the presence of an \underaccent primitive to make it possible to insert cedillas, underdots, ogoneks, and other diacritics in a more reasonable manner both in small quotes from languages that use them (naturally or in transliteration) and in math; neither of these situations is broad enough in scope to warrant a fully-formed font (which is knuth's recommended technique for handling under accents.) while i'm sympathetic with geoffrey's request to up the size of fonts to 64k, i'm highly skeptical about the practicality of this, on two counts: - the typeface styles used for, say, burmese, hangul, devanagari, arabic, ..., are not at all parallel to those used for latin, cyrillic, greek and other western alphabetic fonts. i doubt seriously whether any reputable font supplier (west or east) will provide such a hodge-podge in a single unit, and i would be loath to use such a thing in normal circumstances. - space -- what would be the space requirements for just a complement of serifed + sans-serif fonts in each of four styles (upright, italic, bold, and bold italic) in, say, four sizes? (this is a not atypical situation for the documents i am involved with producing.) -- bb ======================================================================== Date: Mon, 1 Nov 93 14:37:02 GMT Reply-To: RHBNC Philip Taylor From: P.Taylor@RHBNC.AC.UK Subject: In re Geoffrey Tobin, Joachim Schrod, Mike Piff et al... Dear Colleagues --- Before this discussion degenerates too far, let me intervene in the hopes of defusing the situation. The NTS group have met and reached some conclusions about their immediate and long-term proposed activities, and a formal report of the meeting will be published as soon as possible; this is just waiting for consensus to be reached on the minutes. In the meantime, Peter Breitenlohner, a member of the group, has asked for suggestions concerning one immediate goal, i.e. extensions which could sensibly be incorporated within the existing TeX framework. This is not meant to suggest that suggestions that cannot be fitted with the present framework are not wanted, but simply that for the moment the NTS-L list should concentrate on suggestions within the spirit of the existing TeX. Now it may be, and I emphasise may, that some proposals are not possible within the existing TeX framework; if this is the case, then it is right and proper that this should be pointed out, in order to keep the discussion focussed. But I urge anyone wishing to challenge the validity of a proposal to do so in as moderate a manner as possible; we do not wish to alienate members of this list, or discourage them from responding, for fear of having their heads bitten off. 
Remember that this is an open list --- no particular level of expertise is required to join or to participate --- and so if a member of the list feels that another member's ideas need correcting, then let that be done in a polite and constructive way --- in other words, let us educate each other, not hold each other up to scorn. Philip Taylor, Technical Co-ordinator, NTS Project. ======================================================================== Date: Mon, 1 Nov 93 15:56:00 +0200 Reply-To: "NTS-L Distribution list" From: KNAPPEN@VKPMZD.KPH.UNI-MAINZ.DE Subject: FAQ -- second edition Frequently Asked Questions of NTS-L Second edition Date: 1-NOV-1993 Currently maintained by: knappen@vkpmzd.kph.uni-mainz.de (J%org Knappen) Remark about the format: This faq is divided into several sections and subsections. Each section contains a subsection general with some idaes which have not yet been discussed. I added a date to some subsections to allow you to retrieve fuller discussions from the archives. The transactions of this group are archived on ftp.th-darmstadt.de [130.83.55.75] *) directory pub/tex/documentation/nts-l Each file in this directory is named yymm, where (guess :-) yy is the year and mm is the month when the mail arrived. (I.e., all postings of one month are bundled in one file.) *) Avoid using the number above ... it is subject to changes. -1. Contents 0. About NTS 1. Proposed features of a New Typesetting system 1.1. Improvement of Quality 1.2. Internationality 1.3. New Look and Feel 2. Proposed additions to TeX (concrete new primitives) 2.1. \lastmark etc. 2.2. \system 2.3. \skylineskiplimit, \skylinehorizontallimit 2.4. \directioncode 2.5. \textcode 2.6. \afterfi 2.7. \currentinteractionmode 3. Metaremarks 3.1. TeX is not perfect 3.2. In which language shall NTS be written 4. Deviations 4.1. Automated Kerning 4.2. About Lout 0. About NTS (Mar 93, see also Jul 92) At DANTE '93, held at the Technical University Chemnitz last week, Joachim Lammarsch, President of Dante, announced that the NTS project which had been started under the aegis of DANTE, was to be re-formed under a new co-ordinator, Philip Taylor. The old core group, announced at the previous annual DANTE meeting, was to be dissolved, and a new core group established. Membership of the new core group will not be restricted to DANTE members, but will instead be offered to various well-known names (and some lesser-known!) in the international TeX community. see also: F. Mittelbach: E-TeX Guidelines for future TeX, TUGboat v11n3 (1990) P. Taylor: The future of TeX, EuroTeX'92 Prag (Proceedings) 1. Proposed features of a New Typesetting system 1.1. Improvement of Quality 1.1.0 General: Optimised page breaking, avoiding ``rivers'', letterspacing (see also 4.1), Hyphenation (Haupt- und Nebentrennstellen), grid typesetting 1.1.1 Skyline approach to line breaking (Mar 93) You can break paragraphs as usual with the current model, where all lines are simple rectangular boxes. If there's no necessity to insert \lineskip, then you don't have to look at the skyline. Only if two lines are too near (e.g. distance<\lineskiplimit), you have to look into the two rectangular boxes and to check if the boxes inside overlap at one or more places. For the worst case (i.e., you have to look at the skyline for all pairs of lines) processing the skyline model consumes a lot of process time, but this shouldn't hinder us to test this idea and look at the results. 
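For orientation, the interline machinery the skyline idea hooks into already exists in TeX 3.x; only the "look inside the boxes" step would be new. A short reminder of the relevant parameters, shown with plain TeX's default values (the comments paraphrase the FAQ entry, nothing more):

    % Real TeX 3.x parameters; values are plain TeX's defaults.
    \baselineskip=12pt    % preferred baseline-to-baseline distance
    \lineskiplimit=0pt    % if the gap between two line boxes would be less
                          % than this ...
    \lineskip=1pt         % ... \lineskip glue is inserted instead
    % Proposed extra step: exactly in that "too close" case, inspect the two
    % lines' contents and test whether any inner boxes overlap (the skyline
    % check described above).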
Btw, the skyline model seems to be easy to implement in the current TeX, because we need only some changes when the finally broken lines of the paragraph are put on the vertical list. There are more changes needed in the code, if the line break should be changed for the cases where it is possible to avoid an overlap with other break points, but IMHO it's nonetheless a relatively small change.

Additionally you have to introduce some new parameters. I think of something like:

    \skylineskiplimit          (b) minimum vertical distance between two boxes
    \skylinehorizontallimit    (a) minimum horizontal distance

[The ASCII sketch that followed here did not survive transcription; it showed the trailing box of line 1 above the leading box of line 2, with (a) marking the horizontal and (b) the vertical clearance between them.]

and other parameters, but the necessary parameter set, realization, etc. for "skylines" are a subject of discussion.

1.2. Internationality

1.2.0 General: Typesetting in arbitrary directions, unicode support (16 bits), ISO 10646 support (32 bits), ligatures pointing to other fonts, vertical kerning, better accent handling (\topaccent and \botaccent)

1.2.1 Supporting TeX--XeT primitives for right-to-left typesetting

TeX--XeT is an existing extension to TeX supporting right-to-left typesetting and producing a usual dvi-file. TeX--XeT is written by P. Breitenlohner and freely available. It is different from TeX-XeT (one hyphen only). Although TeX will be frozen at version $\pi$, this is not true for TeX--XeT.

1.3. New Look and Feel

1.3.0 General: Windows support, wysiwyg-like features

1.3.1 Interaction with the operating system and other programmes: see 2.2. \system

2. Proposed additions to TeX (concrete new primitives)

2.0. General (Jun 92, Jul 92, Aug 93)

A rather long list of proposed primitives (more or less worked out) was posted by Karl Berry on 10-Jun-1992. It contains suggestions like: \elseif (self-explanatory), \format{foo} (allow the author to select a format), \host{name} \host{os} \host{type} and \getenv to extract host information, \TeXversion, \usertime, \everyeof, and others.

It is currently not possible to get some information about the current mode of TeX and make conditionals dependent on it and/or restore it after some action (see 2.7. \currentinteractionmode).

2.1. \lastmark etc. (Jun 92, Jul 92)

Currently you cannot remove a \write or \mark or \insert or rule from any list at all. If we allow them to be removed, how will the commands appear to the user? If we have \lastmark like \lastbox, then perhaps we need a mark data type so that we can say something like \setmark0=\lastmark. It will probably be difficult in the case of \insert's to think of a good command syntax.

Perhaps \lastpenalty, \lastkern, \lastskip should remove the penalty, kern, skip, ... so that they are consistent with \lastbox. Then \unpenalty, \unkern, and \unskip would be unnecessary. (Of course most macro packages would probably want to reimplement them, as macros: \def\unpenalty{\count@\lastpenalty}, \def\unkern{\dimen@\lastkern}, \def\unskip{\skip@\lastskip}.)

2.2. \system (Mar 93)

2.2.0 General

Oops, this got rather longish, but this topic has caused plenty of traffic. I decided to quote directly the positions of both sides. The subpoints are: 1. Pro, 2. Contra, 3. Syntax.

2.2.1 Pro

First comes the proposal as formulated by Phil Taylor:

There has been much discussion on how a \system primitive might interact with different operating systems, each with different functionality and a different syntax.
My idea was to extend the concept of a `TeX implementation', which at the moment implies the creation and application of a change-file to the master TeX source, to include an implementation-specific macro library. Thus each implementor, as well as creating and applying a change file, would also be responsible for mapping a well-defined set of macros, through the \system primitive, to the syntax and functionality of the operating system for which he or she has assumed responsibility. To cite a specific example: Assume that in e-Lib (a hypothetical macro library to accompany e-TeX), a macro \sys$delete_file {} is partially defined; then each implementor would be responsible for mapping \sys$delete_file { to his or her own implementation of \system. e-Lib would define the effect and the result(s), \system would provide the interface, and the implementor would be responsible for providing the mapping. The question has been asked: ``Why via \system and macros? Why not via explicit primitives to carry out the various functions that are envisaged?'' To which I would suggest that the answer is ``Because `the various functions which are envisaged' is both enormous (requiring many new primitives), and yet not large enough (because no matter what functionality we posit, someone will come up with an idea that has not been considered).'' By implementing just one \system primitive, and an extensible e-Lib macro library, one can create a robust and well-tested e-TeX whilst allowing new system interactions to be added at the simplest points: through the implementation-independent and implementation-specific components of e-Lib. 2.2.2 Contra And here's from the ``Minority Report'' (Tim Murphy and J"org Knappen) May I recall the immortal words of Ken Thompson, "A program should do one thing, and do it well." (TM) I don't like the hackers to decide, making eTeX yet another programme from which I can send e-mail and read news :-) Maybe people will tell me eTeX is a fine operating system, but TeX version $\pi$ is the better typesetter :-) But there is another side of \system, I want to call it the monstrosity side. Many people are thinking now, that TeX is a monster and difficult to tame. \system will add to this monstrosity. It will create a new paradise for hackers creating system hacks. And it will make people turn away from eTeX and use other products, even if they are far less secure. (JK) 2.2.3 Syntax If a \system command is required, should it not have a similar syntax and semantics to the a similar TeX command. I can't think of anything else in TeX (prepares to be shown wrong) that expands in the mouth and has side-effects. Should it not be like \read, \write etc. that is it generates a whatsit that is obeyed at shipout, unless preceeded by an \immediate, in which case it is done immediately by the stomach. There seem to be two obvious syntaxes, one like \write: \system{foo} or \immediate\system{foo} and one like \read: \system{foo} to \baz or \immediate\system{foo} to \baz The latter one would produce the exit code into \baz. Should this be done with catcode 12 characters, or should it be done like \read, with the current catcodes? 2.3. \skylineskiplimit, \skylinehorizontallimit see section 1.1.1 2.4. \directioncode (May 1993) A \directioncode (with syntax analogous to \uccode, \lccode, sfcode) to be assigned to each input character. The basic ones are 0 -- transparent (space, full stop...) 1 -- left-to-right (latin letters, digits...) 
2 -- right-to-left (hebrew letters, arab letters...), a truely international NTS will also have codes for vertical typesetting and some special cases. The question is how to use this idea consistently. One could extend the notion of TeX's modes. Horizontal mode is in fact left-to-right mode, a right-to-left mode is missing. To be complete, this mode will be acquipped with boxen and all the stuff a TeX's mode has. At the beginning of a paragraph NTS decides which mode to choose by the \directioncode of the first input character. Sometimes the first character will have the wrong code, in this case the insertion of an explicit control sequence (like \lrbox{}) is necessary. If a character with another directioncode occurs, NTS starts a \rlbox and finishes it as soon as a character with the original \directioncode appears or at the end of the paragraph. For the building of right-to-left tables a \rlalign is needed. 2.5. \textcode (September 1993) Some of the character coding discussions in the Technical Working Group on Multiple Language Coordination and some experiences I've made with `german.sty' (specially the problems with an active doublequote and hex integer constants!) lead to this _incomplete_ proposal/idea for the following addition: Introduce something like \textcode (and \textchar & \textchardef) which are the text (hmode) equivalent of TeX's \mathcode (and \mathchar/\mathchardef) primitives. With an equivalent and appropriate implemented \textcode primitive (with the choice to define a character as "pseudo-active"), it would be possible to * relate characters to different fonts (using a generalized `fam' of \mathcode) * suppress expansion of active characters (it will only be expanded, if it is read to form the hlist) (using an equivalent \mathcode="8000 value) [This point allows the use of e.g. an pseudo-active " which expands to non-expandable tokens and it removes the special construct \relax\ifmmode... for active characters, too.] 2.6. \afterfi (August 1993) In the answer to an exercise of the ``Around the Bend'' series, Michael Downes realised the non-existence of an \afterfi primitive (Note: He did not demand it nor really miss it). Perhaps an \afterfi can simplify some obscure mouth-only macros with nested conditionals??? (IMHO the \afterfi should be expandable, because \if...\fi is expandable.) 2.7. \currentinteractionmode which returns the current interaction mode. A construction like: \let\savedmode\currentinteractionmode \batchmode ..... \savedmode would become possible than. More of this kind are a conditional or primitive to signal, when TeX is in "expand only" mode (\edef, \mark, \write, ...), when TeX is scanning numbers (here I'm thinking---and hating---german.sty's active doublequote, which can also be used as a marker for hexadecimal numbers), when TeX is peeking for some special tokens (first column in an \halign), etc... 3. Metaremarks 3.0. General Remarks about group efforts vs. one person creating software (Mar 93), ALGOL 68 as a warning example 3.1. TeX is not perfect (Jun 92, Jul 92) The discussion has taken place in June and July 1992. Several details were worked out, where TeX could be improved. Another point of criticism was the programming language of TeX in general, several discutants prefer a procedural language over a macro language. 3.2. In which language shall NTS be written (Mar 93) In 1992, there was much discussion, in which language an NTS should be implemented (candidates were LISP, C, and WEB). 
This has settled in March 1993 (to PASCAL-WEB), because of the acceptance of the idea that rather than wait for an ``all-singing, all dancing'' NTS, the group should develop, in a stepwise manner, small but significant enhancements to TeX. This implies that the enhancements are implemented as change files in WEB. 4. Deviations 4.0. General (empty) 4.1. Automated Kerning (Oct 92) Kindersley's "optical kerning": for the purposes of kerning, each character is replaced by a circle centred on the centre of gravity of that character; the radius of the circle is determined by the fourth moment of the character (that is, the fourth root of the sum over all black pixels of the fourth power of their distance from the centre). On the UKTUG trip to Kindersley's studio, I tried to extract the reason why the fourth, as opposed to third or fifth or whatever, moment is used; the reason is apparently that it "looks right". We can construct elaborate schemes for kerning (Kindersley's fourth moments, FontStudio's (convex?) envelopes, Calamus' eight widths, etc), but the proof of the typographical pudding is in the eating of the resulting words, so to speak. 4.2 About Lout (June 1993) In June 1993, the new system Basser Lout caused several questions and suggestions on this list. The following is taken from a short review of Lout by Bernd Raichle: `Lout' is a (yet another) document formatting system, released under the terms of the GNU General Public License and available on some ftp servers. IMHO it's more like a `troff' (with a better input language and some newer concepts) than a `TeX'. A few citations from the documentation of lout: Lout is a high-level language for document formatting, designed and implemented by the author. The implementation, known as Basser Lout, is a fully operational production version written in C for the Unix operating system, which translates Lout source code into PostScript, a device-independent graphics rendering language accepted by many high-resolution output devices, including most laser printers. [...] When expert users can implement such applications quickly, non-experts benefit. Although Lout itself provides only a small kernel of carefully chosen primitives, Lout has 23 primitive operators... missing, for example, the simplest arithmetical operators (there is only the operator "@Next" which increases a number by one). packages written in Lout and distributed with Basser Lout provide an unprecedented array of advanced features in a form accessible to non-expert users. The features include rotation and scaling, fonts, These features are mostly based on the output language... Postscript (if you are looking inside a Lout package, you find large portions of embedded Postscript code). paragraph and page breaking, TeX does a better job for these two items, because Lout is missing most of TeX's paragraph/page breaking parameters. (Note: Lout uses TeX's hyphenation algorithm and the hyphenation patterns.) displays and lists, floating figures and tables, footnotes, chapters and sections (automatically numbered), running page headers and footers, odd-even page layouts, automatically generated tables of contents, sorted indexes and reference lists, bibliographic and other databases (including databases of formats for printing references), equations, tables, diagrams, formatting of Pascal programs, and automatically maintained cross references. TeX's math setting abilities are better. Lout uses a package named `eq' derived from the `eqn' preprocessor used with `troff'. 
And there are other packages named `tab' (for tabulars) and `fig' (drawing figures). [...] Lout is organized around four key concepts -- objects, definitions, galleys, and cross references -- [...] The concept of `galleys' and the "expansion" of recursive defintions are IMHO the only new concept in Lout: `galleys' are a way to describe a page, dividing it in certain regions which can be filled from different sources (e.g. a footnote galley is filled with footnote text, etc.). Recursive definitions are very simple, e.g. def @Leaders { .. @Leaders } defines the command (Lout calls it `object') to "expand" to a `..' and if there is place for another "expansion" it is called again. For example \hbox to 4in{Chapter 7 \dotfill 53} is in Lout 4i @Wide { Chapter 7 @Leaders 53 } With this recursive definitions, a whole document is defined as a @PageList consisting of a @Page and a @PageList with an incremented @PageNum. A @Page is defined as a set of `galleys' (header, text body, footnotes, footer), which are also defined as a list of text/footnotes/... and so on. Perhaps others can add more impressions, mine are based on the documentation coming with the Lout package and some tests done in 1-2 hours. The End. ======================================================================== Date: Mon, 1 Nov 93 16:52:55 CET Reply-To: "NTS-L Distribution list" From: Joachim Schrod Subject: Re: what to implement in e-TeX In-Reply-To: <199311011402.AA36901@rs3.hrz.th-darmstadt.de> from "Mike Piff" atNov 1, 93 01:56:52 pm Mike Piff wrote: > > %>Joachim Schrod wrote, in a damning indictment of both Geoffrey Tobin and I don't want to indict anybody. I want to point out problems. (I have high respects for Geoffrey, whom I learned to recognize as a knowledgable and helpful person.) In particular, I answered *because* it was by Geoffrey and not by somebody else whose arguments I don't value. (I would have answered you as well. ;-) Please note, that I *agree* mostly with Geoffrey that one would need facilities like he has outlined. I can even add dozens of other things I want -- especially as a formatter's language. But I want to emphasize that it would create a _new_ language, which has to be designed. Patching the old language is IMO not of much use. Besides, I agree with all his points about dynamic memory allocation. But back to your mail; obviously I was too short in some comments. > %>In addition, you're mixing the lexical and the syntactical > %>analysis of TeX. > > Lexical analysis is just one aspect of syntactical analysis. Do you mean > syntactical and semantical? I'm using classical parser terminology (as outlined, eg, in the Dragon book): The lexical analysis transforms an input stream of characters in a token stream, the syntactical analysis accepts this token stream according to a (usual CH2) grammar, and the semantic analysis -- more precisely, the analysis of ``static semantic'' -- checks constraints on the grammatical rules and fires appropriate actions. > Whilst we are on this subject, does there exist anywhere a complete syntax of > \TeX? I couldn't convince myself that it does all exist in the \TeX book, > although large chunks of it do exist there. I'm not convinced either. (Whatever `syntax' means here. If it means language specification, then the ``summary'' chapters are clearly not enough.) Victor's book is good in this context. > %> The TeX language does *ONLY* handle TOKEN LISTS -- nothing else. > %> You have neither numbers nor strings, etc. 
In particular, tokens > %> don't have category codes! > > Count registers are numbers, surely! And, in a round-about way, we are > allowed to add, subtract, multiply and divide values. But count registers are not numbers -- they are count registers. (That's the difference between r- and l-value!) The TeX engine knows about numbers, sure -- but that doesn't mean that the language has them as a basic data type. I.e., if you write \count 10 = 153 this is transformed by the lexical analysis -- assuming normal category codes -- to '((cseq . "\count") (char . ?1) (char . ?0) (space . dc) (char . ?=) (space . dc) (char . ?1) (char . ?5) (char . ?3) (space . dc)) (in LISP notation). During the primitive action which is bound to '(cseq . "\count") the sub-list of the last four tokens (four, not three!) is interpreted as a number -- but on the language level it is *not* a number! Without this, macros like \3 in webmac.tex would not work. Note that the symbol I use as the car of a token pair is not a category code. Catcodes are attributes of characters in the input stream. They are not attributes of tokens. These symbols are token types. Although one has a near relationship between catcodes and token types, they are not identical. (Eg, you cannot have a token of type 'escape. And the catcode 'cseq does not exist.) Btw, don't get me wrong: I don't like it. But it's a fact of life... > %>What is `garbage collection' in this circumstance? \new... isn't a TeX > %>primitive. You can write a \dispose... or \free... macro easily. > > I would question that word ``easily''. Hmm, of course it depends on the amount of experience in TeX programming. OK, it's not trivial, but I would not consider it a difficult task. \new... is to be changed to allocate the thing in question from a free list, and \dispose... adds to the free list. Macros for list handling are available all around, I really don't see difficulties. Btw, Chris told me that the LaTeX3 folks did it already... Btw2, this is another good example for the difference of the TeX language and the TeX engine. The language has symbols (sometimes called variables -- but `symbol' is the correct term), the engine has registers. The binding of a symbol (which is, in fact, always a token list) might be interpreted as the identification of a symbol. What we have to do therefore, is register allocation, plain and simple. LRU, FCFS, or LCFS -- all suffice. > \newcount\a > \newcount\b > > \dispose\a \disposecount\a, I hope. > %>> 5. Boolean variables. > %> > %>What difference to \newif ? > %> > > \newif\ifthis > \this=(\a<\b)\relax > > ??????????????????????? Ah -- but here you mixed to things. You want boolean expressions, not boolean variables. To repeat myself: TeX has not such a thing as an expression. (An expression is `something to do'. The TeX language is not build on the paradigm of `to do', but on `to expand'.) > %>> 6. Robust \if \else \fi statements. > %> > %>TeX doesn't have statements. It's a macro language. > > Well, ``statement'' doesn't appear in DEK's index, but he does define lots of > assignments on p275 of my book, which anyone else could call assignment > statements rather than assignment commands. But usually we don't use the term `statement' in macro languages due to its imperative conotation. (Refer, eg, to Peter Brown _Macro processors and techniques for portable software_, Wiley 1974, or to Alfred Cole _Macro processors_, CUP 1981.) In addition, the `assignment' term of Don Knuth is not the very best. 
(But this goes with `mouth' and `stomach'...) He merges two things under this term: Bindings for symbols and modifications of the registers of the TeX engine. > %>> 8. String storage and manipulation. > %> > %>TeX doesn't have strings. Only token lists. > > Same objection as to \newcount ??? I don't understand this. You're objection to \newcount was that I said it's `easy'. Or did I miss something? > %>Before destroying a usable system, understand it first. After all, > %>from a CS viewpoint TeX is a trivial language. > > What is a ``trivial'' language? A language which does not have many concepts. OK, I was sloppy in my terms -- but this was not a scientific article. It's a language with few concepts: The TeX language belongs to the Lisp family. (Everybody remembers \require? ;-) In particular, it's a list-based macro language with late binding. (Actually, that's all one needs to say to characterize it.) Its data constructs are simpler than in CL: `token list' is the only first order type. Glue, boxes, numbers, etc., are engine concepts; instances of them are described by token lists. Its lexical analzsis is simpler than CL: One cannot program it. One can only configure it. Its control constructs are simpler than in CL: Only macros, no functions. And the macros are only simple ones, one can't compute in them. For those who worked with CL in advance, TeX is a language that is easy to learn. All capabilities are already known and familiar -- it's just a bit awkward to use, because it is so stripped down. Pitty, that most folks don't learn CL, but stay with languages from the Algol family. Then TeX is difficult, of course -- one has to learn all these new concepts. > %> One can't introduce things which go against the paradigm of a > %>language. If one wants them (and I fully agree with you that this is > %>the case), one has to _design_ a complete new language. > > So is this what we are supposed to be doing, designing a new language? That > was not the message I received a few days ago, Exactly, I neither. That was the reason I sent the reply to Geoffrey. -- Joachim =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Joachim Schrod Email: schrod@iti.informatik.th-darmstadt.de Computer Science Department Technical University of Darmstadt, Germany ======================================================================== Date: Mon, 1 Nov 93 17:34:22 +0100 Reply-To: Mike Piff From: Mike Piff Subject: Re: what to implement in e-TeX %>> Count registers are numbers, surely! And, in a round-about way, we are %>> allowed to add, subtract, multiply and divide values. %> %>But count registers are not numbers -- they are count registers. %>(That's the difference between r- and l-value!) The TeX engine knows %>about numbers, sure -- but that doesn't mean that the language has %>them as a basic data type. I.e., if you write %> %> \count 10 = 153 %> %>this is transformed by the lexical analysis -- assuming normal %>category codes -- to %> %> '((cseq . "\count") %> (char . ?1) (char . ?0) (space . dc) %> (char . ?=) (space . dc) %> (char . ?1) (char . ?5) (char . ?3) (space . dc)) %> %>(in LISP notation). During the primitive action which is bound to %>'(cseq . "\count") the sub-list of the last four tokens (four, not %>three!) is interpreted as a number -- but on the language level it is %>*not* a number! %> Now I see where we differ! You are saying that after macro expansion is over, there is still only a token list there. 
But then TeX primitives get to work and do the actual calculations. Fair enough, but I would be happier if TeX primitives were there to carry out real calculations, string calculations, etc, too. Note: happier :-/ not happy :-) \newreal\a \newreal\b \newstring\c \b=2.71828 \a=(1.766766666+\b)/2\relax \c="\jobname" \c=\c+".tex" This is \the\c, and the answer is \the\a. or whatever nonsense is admissible in an extension to TeX. Mike Piff %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% Dr M J Piff %% e-mail: %% Department of Pure Mathematics %% %% University of Sheffield %% M.Piff@sheffield.ac.uk %% Hicks Building %% PM1MJP@derwent.shef.ac.uk %% Hounsfield Road %% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% SHEFFIELD S3 7RH %% Telephone: (0742) 824431 %% England %% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ======================================================================== Date: Mon, 1 Nov 93 17:54:21 +0100 Reply-To: Mike Piff From: Mike Piff Subject: Re: what to implement in e-TeX %>> %>> 5. Boolean variables. %>> %> %>> %>What difference to \newif ? %>> %> %>> %>> \newif\ifthis %>> \this=(\a<\b)\relax %>> %>> ??????????????????????? %> %>Ah -- but here you mixed to things. You want boolean expressions, not %>boolean variables. To repeat myself: TeX has not such a thing as an %>expression. (An expression is `something to do'. The TeX language is %>not build on the paradigm of `to do', but on `to expand'.) %> I disaggree! I see it as ``to expand'' and then ``to do'', ie, both. Knuth uses the mouth/stomach analogy, with some amount of vomit inbetween. I see no reason to separate off just one aspect and call that ``TeX'', and say that some feature doesn't exist because it isn't part of that aspect. I do agree with you, however, in wishing that the separation had never been made. To use an extreme analogy, I had to set up a database using a truly awful system in which queries were separated into several aspects. a) Input of values b) Search query which defines extent of selection, list/modify/delete/etc c) Output format d) Output destination A value from a) cannot be used to define a file name in b), or in d) as a destination filename. It can only be used in b) as a possible field value. c) is also predetermined, so although b) can use conditionals, the output cannot be displayed accordingly to reflect the outcome of those conditionals! The ``Dirty tricks'' chapter of The TeXBook feels similarly constrained. Mike Piff %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% Dr M J Piff %% e-mail: %% Department of Pure Mathematics %% %% University of Sheffield %% M.Piff@sheffield.ac.uk %% Hicks Building %% PM1MJP@derwent.shef.ac.uk %% Hounsfield Road %% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% SHEFFIELD S3 7RH %% Telephone: (0742) 824431 %% England %% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ======================================================================== Date: Mon, 1 Nov 93 18:46:36 CET Reply-To: "NTS-L Distribution list" From: Michael Downes Subject: Re: what to implement in e-TeX In-Reply-To: <01H4N3SJIKRAO2ISY6@MATH.AMS.ORG> > For the moment suggestions for possible extensions are welcome. ... > Peter Breitenlohner Here are several extensions that I think are quite important, close to the current spirit of TeX, and not too hard to implement. (1) Rereading a character list to attach different catcodes. 
Currently this can only be done by writing the character list to an external file and rereading from the file. A way to avoid the overhead of using an external file would be preferable. Given an arbitrary token list, it should be possible to re-apply TeX's reading operations to the list, by doing something like dumping the tokens into TeX's input buffer, transformed into character codes similar to \meaning, and rereading them. Suggested command syntax, similar to \read:

    \reread\toks0 to\macroname

with effect equivalent to

    \def\strip#1>{}
    \edef\temp{\the\toks0 }
    [open \outfile]
    \immediate\write\outfile{\expandafter\strip\meaning\temp}
    [close \outfile, open \infile]
    \read\infile to\macroname

(2) \preparpenalty (or "\aboveparpenalty") added above \parskip.

(3) Make a \predisplaypar like \par that is inserted automatically by TeX when it reads $$ in (nonrestricted) horizontal mode but can also be inserted explicitly. It should not reset \parshape, and it should give \predisplaysize, \displaywidth, and \displayindent their proper values. Then you would need a matching \enddisplay that would finish up a preceding paragraph, but not reset \parshape, and would leave you in horizontal mode.

(4) Make the value of the last line of the previous paragraph available in a variable \prevwidth (like \prevdepth). This idea could be combined with \predisplaysize.

(5) Catcode 7 should not serve two functions (math superscript and special notation ^^xx for input characters). One possibility would be to use category 15 for the latter instead of for `invalid characters'. Now that TeX reads eight-bit characters it is questionable whether making some character invalid by catcode 15 has any use. You can make the character category 13 (active) and define it to give an error message if you like. Or would it be possible to have a special catcode of 16 for ^ by some sort of loose analogy with \write16, \read16, \mathcode"8000, ... The use of the ^ character (ASCII 94) in TeX's representation of special characters for messages and tracing is hard-wired in the TeX program (or maybe the string pool?), unlike \escapechar. So maybe we need something like \escapechar for this too.

(6) After making a box with {...} or \left ... \right

Comments: Warning -- original Sender: tag was fx@DARESBURY.AC.UK
From: Dave Love
Subject: Re: what to implement in e-TeX

>>>>> On Mon, 1 Nov 93 18:46:36 CET, Michael Downes said:

Michael> (7) Make the current input file name available in \inputname (compare
Michael> \jobname, \inputlineno). Ideally the directory (or `folder' or
Michael> whatever) prefix should also be available separately, perhaps
Michael> \inputarea.

Yes, and make the line number and file name *writeable* for the benefit of people producing TeX by preprocessing (e.g. literate programmers) so you can debug in terms of the source file if the preprocessor inserts the information a la the C preprocessor.

========================================================================
Date: Mon, 1 Nov 93 15:10:04 -0500
Reply-To: "NTS-L Distribution list"
From: "Michael D. Sofka"
Subject: Re: what to implement in e-TeX

>one approach would be a new primitive \newcolor, like \language,
>limiting us to 255 colours. so i'd say \newcolor\green
>and somehow associate that with some colour changing command. then
>\TeX could associate all saved material with the current colour.

What you describe would help in the case of producing output for a color printer.
But, a lot of driver support would be needed to make use of this with color separations (separate negatives for each printer plate). The 255 colors, with each character limited to a single color, would be only marginally helpful for process separation. That is, a color being split into its cyan, magenta, yellow and black components.

Michael D. Sofka    mike@psarc.com    Publication Services, Inc.

========================================================================
Date: Tue, 2 Nov 93 14:45:55 +1100
Reply-To: "NTS-L Distribution list"
From: ecsgrt@LUXOR.LATROBE.EDU.AU
Subject: Re: what to implement in e-TeX

Joachim Schrod brought me down to earth with:

% Sorry to say, but IMO your wishes have some fundamental flaws. I think
% they imply that TeX is a statement oriented, imperative language.
% It isn't. ...

Does my background show? :-(

% What is `garbage collection' in this circumstance? \new... isn't a TeX
% primitive. You can write a \dispose... or \free... macro ...

Oops! I think I used the wrong word. I only did one year of CS. What I had in mind was some way of filling the holes created by Mike Piff's example of (things like):

    \new a
    \new b
    \dispose a
    \new c

% > 5. Boolean variables.
%
% What difference to \newif ?

More flexible syntax. Just have one \if statement (modulo expansion), and test for (yes) logical and relational expressions. But if expressions are dead in the water, this idea is sunk.

% TeX doesn't have strings. Only token lists.

How about adding string registers? They would be dynamically allocated, for variable length. Btw, how are macro names stored?

% > 11. A calculation mode, wherein `a * b' can be used for
% > multiplication, for example.
%
% TeX doesn't know expressions, only assignments. (Besides, the
% syntactical sugar you want has been implemented already.)

No, I didn't want syntactic sugar, because that slows things down. What I wanted was a separate mode (\`a la math mode), in which input is differently interpreted. In `calculation' mode, `a * b' would cause an _efficient, internal_ arithmetic calculation, the result of which could be accessed in some way by the user.

% > 12. Numeric variables, string variables, records, arrays,
% > user-definable data types.
%
% TeX has no numbers. (And no strings, as said above.)

In addition to the count registers mentioned by Mike Piff, I posit the dimension registers. If we cannot work with numeric constants (as we Algol, Pascal, C, and even CL fans wish), then let us at least have greater access to the registers, and more operations on them. I think that TeX protects (?) too many of its internal data and operations from the users.

% > 13. Access to catcode _after_ it's been read.
%
% After anything is read, it's a token. A category code is not an
% attribute of a token.

I'm suitably chastened.

% (You might want to have access to the token
% category, but that's something different.)

Let's! How much (if any) access to the token category do we now have? This is the first intimation I've had that token categories existed as a separate class of objects, and his subsequent letter is the first time I've seen some of them named.

% ... from a CS viewpoint TeX is a trivial language.

Just as, from a formal perspective, the electromagnetic field is as simple as possible, and quantum effects introduce no new field equations.
:-) It seems, from a user's eye view, that TeX is difficult because its semantics are not obvious, and its primitives are many, are related in complex ways, and it's not apparent why those particular primitives were chosen. It takes a lot more investment of effort on a user's part for TeX to begin to `make sense' than is the case for any imperative, functional, or macro, language that I've encountered. Not exactly what one expects from a `trivial' language. If the net of CS lets TeX slip through as `trivial', then I say its mesh is too coarse. :-( Hopefully, etex will at least reduce the incidence of hitting one's head against low ceilings. % One can't introduce things which go against the paradigm of a % language. If one wants them (and I fully agree with you that this is % the case), one has to _design_ a complete new language. I take Joachim's point that etex is not NTL. I took the wish list idea too far, for this context. I'll be more moderate, for now. :-) Geoffrey Tobin ======================================================================== Date: Tue, 2 Nov 93 15:20:51 +1100 Reply-To: "NTS-L Distribution list" From: ecsgrt@LUXOR.LATROBE.EDU.AU Subject: Re: what to implement in e-TeX barbara beeton typed: % geoffrey tobin, in his wish list, includes % % 18. Eliminate the `math axis' restrictions. ^^^^^^^^^^^^ I mean: keep the math axis, but if at all feasible, remove whatever it is about it in particular that stops PostScript fonts being used as TeX math fonts. I may be wrong, but I seem to recall articles in c.t.t. saying in c.t.t. that there's something in TeX's assumptions about the math axis that suits CM math fonts, but not PS math fonts. Can someone enlighten _me_ here? % while i'm sympathetic with geoffrey's request to up the size of % fonts to 64k, Whoa! My principal objection was to the hard-wired number `256'. (Used to be various hard-wired instances of `128' and `256'.) Why cannot a single TFM file have, say, 350 characters, as I think some PS fonts do (surreptitiously) ? Sorry, that's an NTL, not an etex, as I now realise. % i'm highly skeptical about the practicality of % this, on two counts: % % - the typeface styles used for, say, burmese, hangul, devanagari, % arabic, ..., are not at all parallel to those used for latin, % cyrillic, greek and other western alphabetic fonts. i doubt % seriously whether any reputable font supplier (west or east) % will provide such a hodge-podge in a single unit, and i would % be loath to use such a thing in normal circumstances. A single unicode font? Maybe, maybe not. However, a single Chinese font requires several thousand characters. Maybe putting it all into one TFM file is too low-level a solution. But it would be nice then to allow more text fonts. I'd also like to repeat my plea for many more math families. Say, infinitely many. Question: Is there any prospect of \mathchoice being made faster? Or replaced by something faster? % - space -- what would be the space requirements for just a % complement of serifed + sans-serif fonts in each of four styles % (upright, italic, bold, and bold italic) in, say, four sizes? I don't know what unicode bodes. Since we're talking about etex, I'll dodge the question of how much space a unicode PK file (or four or sixteen of those) would take, especially as I didn't want to make 64 K or 2^24 or any other astronomical number of characters _compulsory_. I only wanted etex to have no absolute limit, as the 256 barrier is already sometimes insufficient. 
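For readers who have not met the \mathchoice primitive Geoffrey asks about: it takes four branches, one per math style, and TeX typesets all four before discarding three, which is exactly why it is slow. A small plain TeX illustration (the macro name \stylekern is invented here):

    % TeX builds all four branches and keeps only the one matching the math
    % style in force; the other three are wasted work.
    \def\stylekern{\mathchoice{\kern 2pt}{\kern 1.5pt}{\kern 1pt}{\kern 0.5pt}}
    $ a\stylekern b $            % text style: the 1.5pt branch survives
    $$ a\stylekern b $$          % display style: the 2pt branch survives
    $ x_{a\stylekern b} $        % script style: the 1pt branch survives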
Are the math-font-list people and other font designers reading this? If so, please correct me if I'm wrong on this point. Hmm, how about a design for TFM files to which TeX can apply affine transforms? Presently, TeX can scale uniformly. What about separate scales for horizontal and vertical directions ? Slanting ? (Interesting for italics.) Reflection? Rotation by 90 degrees? Rotation by arbitrary angles? (I'm not sure how to interpret arbitrary rotation of metrics in a typesetting context, but I decided to suspend moderation, in the hope that someone has a notion.) Geoffrey Tobin ======================================================================== Date: Tue, 2 Nov 93 09:53:34 GMT Reply-To: "NTS-L Distribution list" From: Sebastian Rahtz Subject: Re: what to implement in e-TeX In-Reply-To: <"leeman.yor.243:01.10.93.21.21.11"@york.ac.uk> NTS-L@DHDURZ1.EARN writes: > What you describe would help in the case or producing output for > a color printer. But, a lot of driver support would be needed to > make use of this with color separations (separate negatives for > each printer plate). The 255 colors with each character limited > to a single color would only marginally helpful for process separation. > That is, a color being split into its cyan, maganta, yellow and > black componants. there is a difference, surely, between the `composite' colours i use in my document, and the CMYK elements used to create them. I am doing a cookbook and i want the headings in pale blue. This is CMYK 0.5 0.3 0.2 0.1 (it isnt, but you know what i mean). i want to define \paleblue, mapped to the CMYK set, and have TeX keep that paleblue info around. so when i say \paleblue feeling blue \savebox{\foo}{something} still blue but now \red feeling aggressively red but want to show my blue box \usebox{\foo} i want the box \foo to retain its colour information as it retains its font information. in the end result, the dvi file says that some words should be set in the CMYK combination, and the driver sorts that out (as it must). i probably still haven't explained myself, because i don't understand the innards of TeX well enough. if someone describes to me how TeX retains font information when it stores stuff in a \box for later use, then I'll tell them i want the colour information taken too... Sebastian PS colour hounds might get a laugh from a colour separation header I wrote for dvips, in the contrib/ directory of the latest release. ======================================================================== Date: Tue, 2 Nov 93 09:57:58 GMT Reply-To: "NTS-L Distribution list" From: Martin Ward Subject: Re: what to implement in e-TeX > ... from a CS viewpoint TeX is a trivial language. I have heard about "the TeX engine" and "then TeX macro language" on this list recently. It seems to me that "the TeX engine" is an extraordinarily primitive language (with no expressions, conditionals, loops, etc. basically just sequencing and assignment to a fixed collection of registers). First point: what would be involved in removing all the limits on sizes of arrays, numbers of registers etc. etc. (for the TeX engine). First, you need a good "bignum" package (to allow unbounded sizes of integers). Then you need an "unbounded array" package (which automatically extends the size of an array as required while keeping the "fast random access" facility - perl arrays work like this). 
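As an aside on arrays (Martin's "unbounded array" above, and Geoffrey's wish 12 earlier): at the macro level the usual workaround today is one control sequence per cell via \csname. It is unbounded in principle, but every cell ever touched permanently consumes hash table and string pool space, which is precisely the kind of fixed-size memory the first point wants made dynamic. A minimal sketch; \arrayset and \arrayget are invented names, not anything proposed in the postings:

    % One control sequence per cell; works in TeX 3.x.
    \def\arrayset#1#2#3{% #1 array name, #2 integer index, #3 value
      \expandafter\def\csname #1:\number#2\endcsname{#3}}
    \def\arrayget#1#2{% expands to the stored value (\relax if the cell is unset)
      \csname #1:\number#2\endcsname}

    \arrayset{width}{7}{23.5pt}
    \dimen0=\arrayget{width}{7} \showthe\dimen0   % shows 23.5pt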
Second point: writing a parser/interpreter for a small imperative language (with conditionals, while loops, case statements, recursion, procedures and functions etc.) is now so trivial that it can be set as an undergraduate exercise :-). So what would be involved in extending "the TeX engine" to a small imperative language (perhaps with an "imperative mode" like maths mode, which interacts with "the rest of TeX" via the usual (now infinite) set of registers). Lots of languages have a macro processor built on top of an imperative "engine" - why not eTeX? Am I talking through my hat here? Can someone at least give a short description of "the TeX engine" (what are you left with after macro expansion, and how is this result processed by the engine). Martin. JANET: Martin.Ward@uk.ac.durham Internet (eg US): Martin.Ward@durham.ac.uk or if that fails: Martin.Ward%uk.ac.durham@nsfnet-relay.ac.uk or even: Martin.Ward%DURHAM.AC.UK@CUNYVM.CUNY.EDU BITNET: Martin.Ward%durham.ac.uk@UKACRL UUCP:...!uknet!durham!Martin.Ward ======================================================================== Date: Tue, 2 Nov 93 10:14:12 GMT Reply-To: "NTS-L Distribution list" From: Sebastian Rahtz Subject: Re: what to implement in e-TeX In-Reply-To: <"leeman.yor.119:01.10.93.17.42.06"@york.ac.uk> > (1) Rereading a character list to attach different catcodes. Currently > this can only be done by writing the character list to an external > file and rereading from the file. A way to avoid the overhead of one just looks at the \meaning, n'est ce pas, and rereads that? > (10) Shorten the key words for \vrule, \hrule: > > \vrule dp 5pt ht 3pt wd 2pt > > to reduce the consumption of main memory when these key words are used I *do* hope NTS wont go down this slippery slope of trivial optimisation which gives the normal user no benefit at all... sebastian ======================================================================== Date: Tue, 2 Nov 93 13:23:49 CET Reply-To: "NTS-L Distribution list" From: Peter Flynn Subject: Re: what to implement in e-TeX One thing that would be nice is a more robust metafont, one that won't suddenly start displaying graphics while building a font like emTeX's does. (BTW I've had zero replies to my post on this on c.t.t...how _do_ you stop it doing this? It does it on one machine and not on another, and there's no diff between the paths, env vars etc that I can see.) ///Peter ======================================================================== Date: Tue, 2 Nov 93 08:35:15 -0600 Reply-To: "NTS-L Distribution list" From: "Michael D. Sofka" Subject: Re: Color >there is a difference, surely, between the `composite' colours i use >in my document, and the CMYK elements used to create them. Well, yes there is in the sense that a symbol of an object and the object are not the same. But, it is very convienant from a color separation viewpoint to link them more then you propose. For instance, in the Adobe color separation model there are two types of colors These are called Process colors and Spot or Custom colors (The three terms are almost interchangable). Now, the system you propose is very handy for Custom colors. These are colors that correspond to a particular ink on a printer's plates. For example, if you have a book that uses two colors, black and some shade of blue, the shade of blue choosen will correspond to some standard ink. The most common such standard is called Pantone and are refered to as PMS colors. So you would ask for, maybe PMS 284 (A nice light shade of blue). 
Now, all TeX would need to do is keep track of what is in black, and what is in PMS 284. And all the driver would need to do is be able to print only black (skipping PMS 284) for one plate, and print PMS 284 (skipping black) for another plate. From within PostScript, the colors would be represented via a combination of CMYK, but which combination depends on the printer for which the output is destined. When making plates, both colors would be black since it is up to the printer (the kind that runs presses) to supply an ink of suitable color. Process colors are different. Process colors ARE a combination of Cyan, Magenta, Yellow and Black and they are intended to be split into those components on four plates. Each plate would, of course, be black since the printer will again supply the appropriate ink. So, when you say something is set in Process 0.5 0.3 0.2 0.1 you are really saying that on the Cyan plate the tint s/b 50%, on the Magenta plate the tint s/b 30% and so on. In this case they do correspond to the values of CMYK. Now, different color PostScript printers will print this combination in different ways, but the book will not be printed on a QMS. Rather, it will be printed on a press with the colors mixed on the paper. (There are some rather peculiar requirements for lines per inch and screen angles, and specialized transfer functions to prevent moir\e patterns. This is why typesetting companies buy very expensive PostScript output devices for negatives.) The actual situation, as far as the driver is concerned, is made a little more complicated by the colors being set in overprint or knockout. An overprint color should print on top of any other characters, rules, or screens set before it. A knockout color, on the other hand, should first remove the covered section before setting the new color. This effect is very important for providing what printers call trap. Trap is a small (typically 2 or 3 mils) area of overlap between adjacent colors. It is there to prevent white lines from showing between color boundaries. Anyway, do you now see why your proposal is fine for QMS color printing, and even for simple 2-color books, but starts to crack at the edges for 4-color book printing (and 6-color -- CMYK plus 2 Custom colors -- is not uncommon)? What would be needed besides the color tags would be a way to label each character and rule (or each color, in which case the driver could do it) as Process or Custom and as Knockout or Overprint. Note: It is possible with the right driver to do all of this in TeX, but your example with text in a box would not work without extra macro work or (as you propose) extending TeX to keep track of the current color. The only circumstance, however, where this would add something that macros could not ever add would be cases of paragraphs breaking in the middle of a color, with the resulting DVI pages being isolated or re-arranged. Michael D. Sofka INTERNET: mike@pubserv.com Publication Services, Inc. SPRINTNET: +1-518-456-5527 Albany Research Center. KEYHOLE11: 42 42' 16" N, 73 54' 43" W 102 Steuben Dr., #11 Guilderland, NY 12084, USA. This came directly from a computer and is not to be doubted or disbelieved. ======================================================================== Date: Tue, 2 Nov 93 17:30:56 MEZ Reply-To: "NTS-L Distribution list" From: Werner Lemberg Subject: Unicode support In-Reply-To: Message of Mon, 1 Nov 93 15:56:00 +0200 from I want to add some thoughts concerning Unicode.
If we want to implement Unicode, I think it's quite useless to have one great font including all characters. Unicode has arranged the characters in pages of 256 characters each, which can be handled in an easy way by TeX if we increase the number of possible fonts. But now comes the real problem: How can we map Unicode into TeX fonts ? Until now I found two topics which must be solved: 1) grouping of character sets extending the 256 character barrier (Chinese, Hieroglyphs etc.) 2) the "Korean problem": It's possible to construct all Korean characters (I think more than 2000, but I'm not sure at the moment) with only TWO (256 character) TeX-fonts ! IMO, these problems could be solved by using an equivalent of a virtual font file or mapping table (but read at IniTeX time). For example, if TeX encounters a U+E37A Unicode character (this is a Chinese character), it would look into a mapping table which TeX font should be used, then adding the font properties (like in the NFSS system) to get the right font name. If TeX reads a Korean character, it would look into another mapping table which primitives should be substituted (again, constructing the font name like the NFSS system). Werner ======================================================================== Date: Tue, 2 Nov 93 23:40:39 EDT Reply-To: "NTS-L Distribution list" From: Jerry Leichter Subject: re: Color Michael Sofka points out the complexities in specifying color. Let's step back for a moment and understand what's going on here. Currently in DVI files, two kinds of objects actually do typesetting: Charac- ters and rules. Rules are simple, fixed objects. Characters are interpreted with respect to an implicit environment, the current font; they have no meaning otherwise. Specifications such as color amount to new, orthogonal components of the implicit environment. Unlike fonts, they presumably affect rules as well as characters; but otherwise, they have the same properties that one currently associates with fonts. Rather than worrying about colors specifically, suppose TeX were extended to allow the specification of an arbitrary number of "properties". Think of these properties as analogous to \fontdimen's: They have no *intrinsic* meaning, but are assigned a conventional meaning by the Metafont programs that create the fonts, TeX itself, and the macros used by TeX. Similarly, properties 1-4 might be, by convention, CYMK values. Property 5 might be 0 for knockout color, 1 for overstrike. And so on. There's no reason to decide up front what all the properties that anyone might ever find useful would be. It's sufficient to standardize a list that covers those possibilities that are currently understood, and let "the market" come up with any others that might be needed. A convention like "positive property numbers will be standardized; don't use any not on the most recent list. Use negative propery numbers for local extensions" would work quite well. All that said - *you don't need NTS to do this*! \special's will do it quite adequately. In fact, they will do it more readably. How about \special{prop CYMK 0.5 0.2 0.3 0.3}, for example. Conventions for use of such \special's would have to be developed, but that's long overdue anyway. Just what would "properties", either in the general form suggested above or in the special form of pre-defined "color" properties add that can't be done today? 
It's not as if anyone expects TeX to, for example, somehow "mix" different color specifications - specifying blue letters in a yellow box selects green print? At least I *hope* no one expects something like that. -- Jerry ======================================================================== Date: Wed, 3 Nov 93 08:29:19 -0600 Reply-To: "NTS-L Distribution list" From: "Michael D. Sofka" Subject: re: Color >All that said - *you don't need NTS to do this*! \special's will do it >quite adequately. In fact, they will do it more readably. How about >\special{prop CYMK 0.5 0.2 0.3 0.3}, for example. Conventions for use of >such \special's would have to be developed, but that's long overdue anyway. I agree, and it works very nicely. We don't use general properties. Rather, the colors we specify do have properties such as being process or custom, and knockout or overprint. >Just what would "properties", either in the general form suggested above or >in the special form of pre-defined "color" properties add that can't be done >today? It's not as if anyone expects TeX to, for example, somehow "mix" >different color specifications - specifying blue letters in a yellow box selects >green print? At least I *hope* no one expects something like that. We do exactly that sort of thing, and many things more complex. Try blended screens with knockout text that has trapping. Or, text over bitmaps. If you are doing 4-color work, Pandora's box opens wide for what a designer may do. There is one case where having more direct support in TeX would be helpful. Let us imagine a paragraph with one sentence that is set in red. This means that at the beginning of the paragraph the current color is black, during the sentence it is red, and after the sentence it is black again. Now, imagine that TeX breaks the page in the middle of the sentence. There is no easy way in TeX to establish that the color at the beginning of the page is red, and not black. If I recall correctly, dvips handles this case by evaluating the current color at the beginning of each page in the DVI file. This is also what our driver does. But, this does not work if the DVI pages are considered in isolation, or if the pages are re-arranged. Why would you re-arrange the pages? Well, printing impositions is one reason. This too is a very common request for 4-color work since it saves a lot of time not having to paste-up and register 4 or more plates. What this comes down to is that DVI pages are independent (if you read the fonts from the end of the file) unless you are using specials. Mike Michael D. Sofka INTERNET: mike@pubserv.com Publication Services, Inc. SPRINTNET: +1-518-456-5527 Albany Research Center. KEYHOLE11: 42 42' 16" N, 73 54' 43" W 102 Steuben Dr., #11 Guilderland, NY 12084, USA. This came directly from a computer and is not to be doubted or disbelieved. ======================================================================== Date: Wed, 3 Nov 93 18:02:19 CET Reply-To: "NTS-L Distribution list" From: Joachim Schrod Subject: Re: Color In-Reply-To: <199311031645.AA33671@rs3.hrz.th-darmstadt.de> from "Michael D. Sofka" at Nov 3, 93 08:29:19 am Michael D. Sofka wrote: > > There is one case where having more direct support in TeX would be > helpful. Let us imagine a paragraph with one sentence that is set in > red. This means that at the beginning of the paragraph the current > color is black, during the sentence it is red, and after the sentence > it is black again. Now, imagine that TeX breaks the page in the middle > of the sentence.
There is no easy way in TeX to establish that the > color at the beginning of the page is red, and not black. Change the output routine to add a color special at the start of each page. You might want to optimize for changes to a default color, i.e., omit the special then. If your style file just appends this code to the expansion of \@outputpage, you'll not have to be afraid of changes in LaTeX, too. (As long, as LaTeX uses \@outputpage. ;-) You might have to check for the last page, 'though -- but that should be easy as well. Another point where care is needed, are pagefloats. They will most probably be in another color. But the macros won't get too long. -- Joachim =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Joachim Schrod Email: schrod@iti.informatik.th-darmstadt.de Computer Science Department Technical University of Darmstadt, Germany ======================================================================== Date: Wed, 3 Nov 93 21:38:44 CET Reply-To: "NTS-L Distribution list" From: Joachim Schrod Subject: Re: what to implement in e-TeX In-Reply-To: <199311021157.AA32619@rs3.hrz.th-darmstadt.de> from "Sebastian Rahtz" at Nov 2, 93 10:14:12 am sebastian wrote: > > > (1) Rereading a character list to attach different catcodes. Currently > > this can only be done by writing the character list to an external > > file and rereading from the file. A way to avoid the overhead of > one just looks at the \meaning, n'est ce pas, and rereads that? The expansion of \meaning is a token list, not a character stream. You've lost... Btw, already the note (1) is wrong. One cannot write a character list to an external file. One can only write a token list, at the time of a \write there are no characters any more. The write primitive -- which is usually bound to the token '(cseq . "write") -- will transform this into a character stream again. (I think I'm getting annoying, but it remains important... ;-) -- Joachim =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Joachim Schrod Email: schrod@iti.informatik.th-darmstadt.de Computer Science Department Technical University of Darmstadt, Germany ======================================================================== Date: Wed, 3 Nov 93 14:11:33 -0600 Reply-To: "NTS-L Distribution list" From: "Michael D. Sofka" Subject: Re: Color >> There is one case where having more direct support in TeX would be >> helpful. Let us imagine a paragraph with one sentence that is set in >> red. This means that at be beginning of the paragraph the current >> color is black, during the sentence it is red, and after the sentence >> it is black again. Now, imagen that TeX breaks the page in the middle >> of the sentence. There is no easy way in TeX to establish that the >> color at the beginning of the page is red, and not black. >Change the output routine to add a color special at the start of each >page. You might want to optimize for changes to a default color, i.e., >omit the special then. This doesn't work in the above case. Note, the optimal break point is in the middle of red, but the point is not checked for until the color has been set back to black. The result is that \output{} sets the color at the top of the page to black, which is wrong since the first paragraph s/b red. The only thing that may get around this is to make an \everyline{} macro by setting \linepenalty=-10001 and having \output{} take this as a signal that we are in a color change. 
Then, \output{} checks each line to see if it would fit on the current page (with a new penalty of \oldlinepenalty). If the ideal break does turn out to be in the red section, then a special setting red can be placed at the top of the new page. Otherwise, we go back into non-everyline mode. Mike Michael D. Sofka INTERNET: mike@pubserv.com Publication Services, Inc. SPRINTNET: +1-518-456-5527 Albany Research Center. KEYHOLE11: 42 42' 16" N, 73 54' 43" W 102 Steuben Dr., #11 Guilderland, NY 12084, USA. This came directly from a computer and is not to be doubted or disbelieved. ======================================================================== Date: Wed, 3 Nov 93 22:09:40 CET Reply-To: "NTS-L Distribution list" From: Joachim Schrod Subject: Re: what to implement in e-TeX In-Reply-To: <199311020348.AA26894@rs3.hrz.th-darmstadt.de> from"ecsgrt@LUXOR.LATROBE.EDU.AU" at Nov 2, 93 02:45:55 pm Geoffrey Tobin wrote: > > Joachim Schrod brought me down to earth with: > > % TeX doesn't have strings. Only token lists. > > How about adding string registers? > > % TeX has no numbers. (And no strings, as said above.) > > In addition to the count registers mentioned by Mike Piff, I posit > the dimension registers. If we cannot work with numeric constants > (as we Algol, Pascal, C, and even CL, fans wish), then let us at least > have greater access to the registers, and more operations on them. No -- I fully agree with your point. We *need* numbers, strings, user-definable data, etc. But -- this is not the programming language TeX any more, which we use (and hate ;-) E.g., tricks like \def\3#1{\hfil\penalty#10\hfilneg} % optional break within a statement (from webmac.tex) would not be possible any more. I will surely trade it for that! > I think that TeX protects (?) too many of its internal data and > operations from the users. Hides, I would say. IMAO it's fatal design flaw that one doesn't have the basic primitives one would await for the basic structures of a language. E.g., personally I would trade \expandafter any time for the possibility to say (define-tex-fun expandafter (tok1 tok2) "Expands to the the list made by tok1 and the first-level expansion of tok2." (cons tok1 (expand tok2))) where I assume that a `tex-fun' is something which takes $n$ argument tokens from the input token stream and replaces them by a token list (which might be empty). [For those among us who don't know lisp: `tok1' and `tok2' are arguments, `cons' means prepend to a list, `expand' is assumed to deliver the binding of the token `tok2'.] [For those among us who know TeX: I'm aware that this isn't a complete definition. It shall not be, it's an impression. :-) ] > % (You might want to have access to the token > % category, but that's something different.) > > Let's! How much (if any) access to the token category do we now have? > This is the first intimation I've had that token categories existed as > a separate class of objects, and his subsequent letter is the first > time I've seen some of them named. The access is over the control sequence \ifcat. The term I used in the subsequent letter (`token type') is a better one than `token category', it will not be confounded so easily. The distinction can be looked at best if you regard control sequences. Stop -- this is a bad term, too. Let's name them symbols, a symbol is something I can bind a token list on. There exist three kinds of symbols tokens: (symbol . ?~) is an active character (here `~') (symbol . "}") is a control sequence (here `\}') (symbol . 
"foo") is also a control sequence (here `\foo') Note, that (1) active characters and control sequences have the same token type. Run, eg, \ifcat ~\relax \immediate\message{true} \else \immediate\message{false} \fi through TeX. (2) This whole business about control words and control characters (i.e., \foo vs. \}) is something the lexical analysis handles. On the token level, this distinction is gone. To wit: \catcode`\(=12 \def\({macro 1} \catcode`\(=11 \def\({macro 2} \catcode`\(=12 \show\( I came to this notion when I had to build a model of the language, it isn't described anywhere. (Once upon a time, I gave courses on such things... :-) > % ... from a CS viewpoint TeX is a trivial language. > > It seems, from a user's eye view, that TeX is difficult because its > semantics are not obvious, Oh, you misunderstood me. That TeX is a trivial language does not imply that it's easy to use (or that I think this is good so). It simply tells that only very few constructs (data- and construct-wise) are used. My opinion is quite the contrary, the TeX programmer has a much to large gap between his mental model (e.g., numbers) and the mechanisms of the language (i.e., token lists). I curse myself often enough. > and its primitives are many, are related in > complex ways, and it's not apparent why those particular primitives > were chosen. This is one of the reasons why I think it's important to distinguish the TeX language from the TeX engine: The former is quite regular and can be explained much better. -- Joachim =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Joachim Schrod Email: schrod@iti.informatik.th-darmstadt.de Computer Science Department Technical University of Darmstadt, Germany ======================================================================== Date: Wed, 3 Nov 93 22:11:43 CET Reply-To: "NTS-L Distribution list" From: Joachim Schrod Subject: Re: Color In-Reply-To: <199311032101.AA36748@rs3.hrz.th-darmstadt.de> from "Michael D. Sofka" at Nov 3, 93 02:11:33 pm You wrote: > > >> There is one case where having more direct support in TeX would be > >> helpful. Let us imagine a paragraph with one sentence that is set in > >> red. This means that at be beginning of the paragraph the current > >> color is black, during the sentence it is red, and after the sentence > >> it is black again. Now, imagen that TeX breaks the page in the middle > >> of the sentence. There is no easy way in TeX to establish that the > >> color at the beginning of the page is red, and not black. > > >Change the output routine to add a color special at the start of each > >page. You might want to optimize for changes to a default color, i.e., > >omit the special then. > > This doesn't work in the above case. Note, the optimal break point is > in the middle of red, but the point is not checked for until the color > has been set back to black. You have to use \mark, of course. The TeX book explains how to put things in marks without setting them later -- one should be possible to transfer this to a LaTeX pagestyle in a straight forward way. I just wanted to point out where the handling of the marks can be hooked in. 
-- Joachim =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Joachim Schrod Email: schrod@iti.informatik.th-darmstadt.de Computer Science Department Technical University of Darmstadt, Germany ======================================================================== Date: Thu, 4 Nov 93 03:36:33 (GMT) Reply-To: "NTS-L Distribution list" From: Timothy Murphy Subject: Re: Color In-Reply-To: <01H4W5POKXH200PFUY@mailgate.ucd.ie> from "Joachim Schrod" at Nov 3, 93 10:11:43 pm Could someone please explain to me how colour changes differ from any other font change. Timothy Murphy ======================================================================== Date: Thu, 4 Nov 93 10:09:06 +0100 Reply-To: "NTS-L Distribution list" From: Piet van Oostrum Subject: Re: Color In-Reply-To: <199311032114.AA16121@infix.cs.ruu.nl> >>>>> Joachim Schrod (JS) writes: JS> You have to use \mark, of course. The TeX book explains how to put JS> things in marks without setting them later -- one should be possible JS> to transfer this to a LaTeX pagestyle in a straight forward way. JS> I just wanted to point out where the handling of the marks can be JS> hooked in. You must then also use aftergroup to generate a mark when the surrounding color is restored, but I am not sure that this solves all problems. Piet van Oostrum ======================================================================== Date: Thu, 4 Nov 93 09:12:21 LCL Reply-To: Mike Piff From: Mike Piff Subject: Re: Color %>Date: Thu, 4 Nov 93 03:36:33 (GMT) %>Reply-to: NTS-L Distribution list %>From: Timothy Murphy %>Subject: Re: Color %>To: Multiple Recipients of %> %> %>Could someone please explain to me %>how colour changes differ from any other font change. %> %>Timothy Murphy %> %> %> At last, a spark of sense in this discussion! I agree with Tim. The only difference between red Times Roman and green Times Roman is that the drivers select a red font rather than a green one. TeX need know nothing about this. All it needs is the shape of the boxes surrounding the characters. Now, if we are saying that each font should have various orthogonal attributes, such as slant, weight, colour, etc, this can be handled either in the way LaTeX does using NFSS or by completely rewriting TeX---and METAFONT too presumably---so that a slant, etc, can arbitrarily be applied to any font on the fly. Mike Piff %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% Dr M J Piff %% e-mail: %% School of Mathematics and Statistics %% %% University of Sheffield %% M.Piff@sheffield.ac.uk %% Hicks Building %% %% SHEFFIELD S3 7RH %% Telephone: (0742) 824431 %% England %% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ======================================================================== Date: Thu, 4 Nov 93 10:17:13 GMT Reply-To: "NTS-L Distribution list" From: David Carlisle Subject: Re: Color In-Reply-To: <9311040916.AA08574@m1.cs.man.ac.uk> (message from Mike Piff on Thu, 4 Nov 93 09:12:21 LCL) Mike Piff says: %> %>Could someone please explain to me %>how colour changes differ from any other font change. %> %>Timothy Murphy %> %> %> At last, a spark of sense in this discussion! I agree with Tim. The only difference between red Times Roman and green Times Roman is that the drivers select a red font rather than a green one. TeX need know nothing about this. All it needs is the shape of the boxes surrounding the characters. 
As has been explained by earlier posters, the difference is that TeX does not support colour internally. If you go

  \setbox0=\vbox{\rm roman text ..... }
  \setbox2=\vsplit0 to 1in

then both box0 and box2 have at the primitive level the information that they contain cmr10. However, if you go

  \setbox0=\vbox{\green\rm roman text ..... }
  \setbox2=\vsplit0 to 1in

where \green just puts in a \special, then after the split, box0 has lost the information that it contains green text. As has been explained, you can work round this to some extent using \mark and possibly \aftergroup, but getting it right (and integrating the required changes into a format the size of LaTeX) is non-trivial. If you think that you never use \vsplit, so this isn't a problem for you, think `page break' instead of \vsplit. David ======================================================================== Date: Thu, 4 Nov 93 11:45:15 +0100 Reply-To: "NTS-L Distribution list" From: Anselm Lingnau Subject: Re: Color In-Reply-To: (Your message of Thu, 04 Nov 93 10:17:13 GMT.) <9311041025.AA18637@gauss.math.uni-frankfurt.de> Mike Piff said > The only > difference between red Times Roman and green Times Roman is that the drivers > select a red font rather than a green one. TeX need know nothing about this. > All it needs is the shape of the boxes surrounding the characters. and David Carlisle answered: > As has been explained by earlier posters, the difference is that TeX > does not support colour internally. > > If you go > > \setbox0=\vbox{\rm roman text ..... } > > \setbox2=\vsplit0 to 1in > > then both box0 and box2 have at the primitive level the information > that they contain cmr10. There is obviously a difference in degree between being able to use, say, two or three different colours in a document and doing multicoloured maths or smoothly rainbow-coloured titles. Suppose for the moment that we won't be using more than a couple of different colours for a few fonts (say five fonts in at most three colours each). Can anybody explain to me why it wouldn't be possible to put the colour information into a virtual font and let the driver worry about figuring out which colour to use for which glyph? This would solve the `\vsplit' problem above since the colour information is available on the glyph level. Of course this doesn't work for rules because they don't have a font associated with them. I suppose one could use a `\special' mechanism for rules since rules can't be split, anyhow. Anselm --- Anselm Lingnau .................................. lingnau@math.uni-frankfurt.de The viability of standards is inversely proportional to the number of people on the committee. --- James Warner ======================================================================== Date: Thu, 4 Nov 93 11:53:45 CET Reply-To: "NTS-L Distribution list" From: Peter Flynn Subject: Re: Color > Could someone please explain to me > how colour changes differ from any other font change. > > Timothy Murphy Yes. They're colour changes, not font changes. IMHO, colour is merely an attribute of a font, so should be treated in an agglutinative manner in both directions, e.g. \bf\red should get you boldface first, then make it red. The next font change (either a } or e.g. \it) should also terminate red and revert to whatever was there before. \red\bf applies global red first, then shifts to boldface. Ending boldface (by a } or another font directive) would not terminate the red. But I'm happy to be redirected if this is wrong for some reason.
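[A minimal sketch of the kind of group-scoped colour macro being discussed, assuming a dvips-style driver that keeps its own colour stack via \special{color push ...} and \special{color pop}; the macro names are invented:

  \def\colour#1{\special{color push #1}\aftergroup\endcolour}
  \def\endcolour{\special{color pop}}
  % usage: {\bf\colour{rgb 1 0 0} bold red text} -- bold and red both end at }
  % nesting works because the *driver* restores the previous colour at `pop'

A page break inside the group still defeats this, for exactly the reason David gives: the specials are invisible to TeX, so the \mark machinery discussed earlier is needed on top.]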
///Peter ======================================================================== Date: Thu, 4 Nov 93 12:00:44 CET Reply-To: "NTS-L Distribution list" From: Joachim Schrod Subject: Re: Color In-Reply-To: <199311040910.AA33761@rs3.hrz.th-darmstadt.de> from "Piet van Oostrum" at Nov 4, 93 10:09:06 am You wrote: > > >>>>> Joachim Schrod (JS) writes: > > JS> You have to use \mark, of course. The TeX book explains how to put > JS> things in marks without setting them later -- one should be possible > JS> to transfer this to a LaTeX pagestyle in a straight forward way. > JS> I just wanted to point out where the handling of the marks can be > JS> hooked in. > > You must then also use aftergroup to generate a mark when the surrounding > color is restored, but I am not sure that this solves all problems. ARGH. I should have been quiet. I don't think that {\green \it Text} is a good way to enter green italic text. If you don't want to go crazy in the implementation, colours might not be treated the same way as fonts. E.g., \begin{green} {\it Text} \end{green} Now you can setup specials _and_ marks at the start and at the end of the environment. One has to take care that marks will be still used for headlines (is this the correct English term? ``Living column titles'' would be the direct translation...). One has to hide the colour information with the \if ... \else \fi trick mentioned in the TeX book. Of course, if you can persuade your users to type \begin{green} \begin{it} Text \end{it} \end{green} you're orthogonal again. :-) -- Joachim =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Joachim Schrod Email: schrod@iti.informatik.th-darmstadt.de Computer Science Department Technical University of Darmstadt, Germany ======================================================================== Date: Thu, 4 Nov 93 12:31:24 +0100 Reply-To: "NTS-L Distribution list" From: Reino de Boer Subject: Re: Color > Date: Thu, 4 Nov 93 11:53:45 CET > From: Peter Flynn > Yes. They're colour changes, not font changes. IMHO, colour is merely > an attribute of a font, so should be treated in an agglutinative manner > in both directions, eg > > \bf\red should get you boldface first, then make it red. The > next font change (either a } or eg \it) should also terminate > red and revert to whatever was there before. > > \red\bf applies global red first, then shifts to boldface. Ending > boldface (by a } or another font directive) would not terminate > the red. If colour would merely be an attribute of a font, then \bf\red bold reddish \it bold italic reddish would have to be the result of your first example, and there would be no difference between \bf\red and \red\bf. This change could be incorporated through macros and the addition of colored fonts. The macros aren't all that difficult, it's the fonts that make it hard. There was a suggestion using virtual fonts. I don't know that much about virtual fonts, can anyone explain if this would be possible ? Another option would be to convince METAFONT that pixels have a color, instead of just being on or off. Imagine multicolored glyphs :-) Reino -- Reino R. A. de Boer CS Dept, Faculty of Economics, Erasmus University Rotterdam email: sysrb@cs.few.eur.nl There exists a way to compile TeX (Knuth 1985). 
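[Pulling the \special and \mark suggestions together, a stripped-down plain TeX sketch -- invented names, a driver that understands \special{color ...} assumed, and no headlines, inserts, or LaTeX mark handling:

  \def\setcolour#1{\special{color #1}\mark{#1}}   % change colour and record it
  \output={%
    \edef\pagecolour{\topmark}%                   % colour in force at page top
    \ifx\pagecolour\empty \def\pagecolour{black}\fi
    \shipout\vbox{\special{color \pagecolour}\unvbox255}%
    \global\advance\pageno by 1 }
  % usage: \setcolour{rgb 1 0 0} ... \setcolour{black}

As Joachim notes, in LaTeX the \mark would have to be multiplexed with the sectioning marks, and floats need their own treatment.]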
======================================================================== Date: Thu, 4 Nov 93 14:29:37 CET Reply-To: "NTS-L Distribution list" From: Joachim Schrod Subject: Re: what to implement in e-TeX In-Reply-To: <199311021008.AA37068@rs3.hrz.th-darmstadt.de> from "Martin Ward" at Nov 2, 93 09:57:58 am Martin Ward wrote: > > I have heard about "the TeX engine" and "then TeX macro language" on > this list recently. It seems to me that "the TeX engine" is an > extraordinarily primitive language (with no expressions, conditionals, loops, > etc. basically just sequencing and assignment to a fixed collection of > registers). I would say: Not a `primitive language', but a primitive (abstract) computer. > First point: what would be involved in removing all the limits on sizes > of arrays, numbers of registers etc. etc. (for the TeX engine). First, you > need a good "bignum" package (to allow unbounded sizes of integers). Then > you need an "unbounded array" package (which automatically extends the size > of an array as required while keeping the "fast random access" facility - perl > arrays work like this). I don't know what would be involved -- but such work might convince me to switch to eTeX... In fact, I think that the `dynamization' of TeX would not go outside the limit of the system we have now -- IMO these are system dependent parts. > Second point: writing a parser/interpreter for a small imperative language > (with conditionals, while loops, case statements, recursion, procedures > and functions etc.) is now so trivial that it can be set as an undergraduate > exercise :-). So what would be involved in extending "the TeX engine" to > a small imperative language (perhaps with an "imperative mode" like maths > mode, which interacts with "the rest of TeX" via the usual (now infinite) > set of registers). As I wrote yesterday -- why do yo restrict yourself to one language? > Can someone at least give a short description of "the TeX engine" > (what are you left with after macro expansion, and how is this result > processed by the engine). Please mail answers to this list, or make it available by anonymous ftp. I would be interested as well -- since I can't see a _short_ one. The TeX engine with all it's unregular primitives is baroque. For me, it seems to be easier to define the TeX language formally. (But then, formal language definitions is my research area -- so I may be biased. ;-) -- Joachim =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Joachim Schrod Email: schrod@iti.informatik.th-darmstadt.de Computer Science Department Technical University of Darmstadt, Germany ======================================================================== Date: Thu, 4 Nov 93 14:04:50 GMT Reply-To: "NTS-L Distribution list" From: David Carlisle Subject: Re: Color In-Reply-To: <9311041232.AB16105@m1.cs.man.ac.uk> (message from Peter Flynn on Thu, 4 Nov 93 11:53:45 CET) >>>>> "Peter" == Peter Flynn writes: >> Could someone please explain to me how colour changes differ from >> any other font change. >> >> Timothy Murphy Peter> Yes. They're colour changes, not font changes. IMHO, colour is Peter> merely an attribute of a font, so should be treated in an Peter> agglutinative manner in both directions, eg Peter> \bf\red should get you boldface first, then make it Peter> red. The next font change (either a } or eg \it) should also Peter> terminate red and revert to whatever was there before. Peter> \red\bf applies global red first, then shifts to Peter> boldface. 
Ending boldface (by a } or another font directive) Peter> would not terminate the red. Peter> But I'm happy to be redirected if this is wrong for some Peter> reason. Peter> ///Peter Not necessarily wrong, but you seem to be asking for a rather weird syntax. Firstly, as usually implemented in TeX macros, colour is not an attribute of the current font; for instance it also affects rules. But more specifically, you say \bf\red should attach `red' as a property to the current font, so redness will be lost at the next font change, whether explicit, or at a group end. Fair enough, if that is what you want, but then I cannot see why \red\bf would not first attach redness to whatever was the current font, and then discard this at the next font change, the \bf. What is the syntax that makes this a `global' change? David ======================================================================== Date: Thu, 4 Nov 93 08:22:28 -0600 Reply-To: "NTS-L Distribution list" From: "Michael D. Sofka" Subject: Re: Color Well, it's morning here again and my NTS box is overflowing :-) >> This doesn't work in the above case. Note, the optimal break point is >> in the middle of red, but the point is not checked for until the color >> has been set back to black. >You have to use \mark, of course. The TeX book explains how to put >things in marks without setting them later -- one should be possible >to transfer this to a LaTeX pagestyle in a straight forward way. > I just wanted to point out where the handling of the marks can be >hooked in. Someone else suggested this to me (Art Ogawa?) and it just got filed away. I think it could be made to work for the case I gave. With the exception of mid-paragraph changes, maintaining the current color across page boundaries is fairly easy. Regarding all of this discussion about virtual fonts and colors and \bf\red etc.: it won't work (too strong; substitute "is not practical"). Here are the reasons why: 1. Fonts are not the only thing set in color. TeX also has rules, and these would have to be affected by color changes as well. If you already have to build in color specials for rules you might as well use them for fonts. 2. If you are serious about doing professional color work you WILL have to also use specials to set screens. Even in simple 2-color books the design will call for tinted elements, or screened areas (the red regions with type---check out the current crop of textbooks at the local university.) You will need specials for these and they will also have to be affected by color. 3. Reread the description of process colors (Cyan, Magenta, Yellow and Black separation colors) and Custom colors (specific color (like buying paint for your house) applied to the plate). 4. Reread the description of knockout and overprint (and the relation to trapping). 5. Colors are not orthogonal. There are cases where elements appear in more than one color. The most common case is when some element must appear in all colors. This happens on every page of a color book for crop marks (marking the area of the physical page), taglines (identifying information about the page) and registration marks (used to help the printer line up the color separations). The other common case is process colors. Now, I state again, all of this can be done with TeX using specials. You don't need NTS/e-TeX for color, but if you are going to add color to a typesetting system, check out Adobe Illustrator or Quark Xpress or other programs actually used to do professional color work.
Better yet, visit a print shop or typesetter who does color work. (Joachim, If you and Christine are ever in Champaign, Illinois let me know and I'll give you a tour.) Michael D. Sofka INTERNET: mike@pubserv.com Publication Services, Inc. SPRINTNET: +1-518-456-5527 Albany Research Center. KEYHOLE11: 42 42' 16" N, 73 54' 43" W 102 Steuben Dr., #11 Guilderland, NY 12084, USA. This came directly from a computer and is not to be doubted or disbelieved. ======================================================================== Date: Thu, 4 Nov 93 16:13:11 +0000 Reply-To: "NTS-L Distribution list" From: Robin.Fairbairns@CL.CAM.AC.UK Subject: Re: Color In-Reply-To: Mike Piff's message of "Thu, 04 Nov 93 09:12:21 -1100." <"swan.cl.c Mike Piff supports Timothy Murphy in suggesting that colour in TeX need only be a property of a font. The original suggestion, however, was to allow coloured rules; restricting colour to fonts would eliminate that possibility. I'm also uneasy about using \special to select colour (though it's obviously the only way to go until we get an extension under way); I would far sooner have it being an attribute that was groupable, in the way that (dare I say it) font selections are. -- Robin (Campaign for Real Radio 3) Fairbairns rf@cl.cam.ac.uk U of Cambridge Computer Lab, Pembroke St, Cambridge CB2 3QG, UK ======================================================================== Date: Thu, 4 Nov 93 10:59:22 +0100 Reply-To: Mike Piff From: Mike Piff Subject: re: colour Could we have a primitive to change the colour and texture of the paper TeX is printed on? %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% Dr M J Piff %% e-mail: %% School of Mathematics and Statistics %% %% University of Sheffield %% M.Piff@sheffield.ac.uk %% Hicks Building %% %% SHEFFIELD S3 7RH %% Telephone: (0742) 824431 %% England %% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ======================================================================== Date: Thu, 4 Nov 93 18:52:07 +0100 Reply-To: "NTS-L Distribution list" From: KNAPPEN@VKPMZD.KPH.UNI-MAINZ.DE Subject: Colourful glyphs I like the idea of coloured glyphs -- allthough this would call for changes in METAFONT and the pk file format. The availability of explicitly coloured glyphs would allow the import of colour graphics as a font into TeX and the dvi file. (I admit, that the writing of a good driver capable of coloured fonts is a non-trivial task.) However, METAFONT is only able to send specials to the gf file; it cannot send them -- if I'm not mistaken -- to the tfm file nor are specials propagated to the pk-file by gftopk (if the pk format is able to include specials at all). Can someone more experienced with the more esoteric aspects of METAFONT comment the possiblity of creating coloured gf files (Of course meaning: two or more colours in one glyph) ? --J"org Knappen. ======================================================================== Date: Thu, 4 Nov 93 18:48:35 +0100 Reply-To: "NTS-L Distribution list" From: KNAPPEN@VKPMZD.KPH.UNI-MAINZ.DE Subject: 16 bit eTeX Idea on 16 bit TeX This idea is a rather conservative one, because it is designed to work with standard tfm files and produces a standard dvi file as output. It proposes only few new primitives, allthough the internal changes to the TeX programme may be rather radical. Imho, _one_ executable should be able to handle 8 bit input and 16 bit input, to be switched by a qualifier or flag. 
I'm not sure whether the input should be switchable inside NTS. A pro is surely that with such a switch old 8 bit files can be integrated into a 16 bit document. However, here it goes: The model primitive of TeX is \mathchardef. The problem is that it assigns _one_ number, containing different kinds of information. You can even obscure this information by giving the number in decimal instead of hex notation. The limitation to 16 math font families manifests itself in this one number. Therefore,

  \emathchardef <control sequence> = <class>:<family>:<position in font>

where the three parts of information are put into three delimited numbers. The range of these three numbers can consequently be freed from its current restrictions. For text, apply the idea of \textcode, here in its extended form:

  \etextcode <character code> = <directionality>:<uppercase code>:<lowercase code>:<font encoding>:<location in font>

where I propose for the meaning of the numbers the following:
  directionality (none implied/ left-right/ right-left/ top-bottom/ ...)
  text uppercase code (see below)
  text lowercase code
  fontencoding (some kind of textfam)
  location in font.
Comment on numbers 2 and 3: I want to have more decent \uctext and \lctext primitives, which -- acting only on upper- or lowercase textmode stuff, but not math, active characters, etc. -- avoid other difficulties of the \uppercase primitive, like the counterintuitive placement in \if's. --J"org Knappen. P.S. Joachim, please pardon my sloppy terminology. Its purpose is to illustrate what I mean, not to be exact in the CS sense. P.P.S. Thanks to Bernd Raichle and Werner Lemberg for their stimulating postings on this thread. ======================================================================== Date: Fri, 5 Nov 93 18:21:51 +1100 Reply-To: "NTS-L Distribution list" From: ecsgrt@LUXOR.LATROBE.EDU.AU Subject: Re: Colourful glyphs Re colored glyphs, there are good and bad points. 1. PK files will be much bigger. Imagine 24 bit JPEG pixel depths, instead of the current 1 bit deep B&W. 2. Grey tones would allow a modified MF to do anti-aliasing. 3. Every time we want a color variation, we'd have to regenerate the font. The number of possible colorings is exponential in the font resolution. Color glyphs seem, at first glance, to imply a big change in the specifications of MF, GF and PK. Alternatively, color could be `encoded' in an array of B&W pixels; then a color DVI driver could read the colors, on the assumption that the PK file was a color one. More robustly: PK specials could specify whether the next character is B&W, monochrome, or color, and how many bits describe its grey levels. The absence of a PK special before a character would mean the default interpretation: a B&W character. ASIDE: much of the present suffering is due to the physical separation of TeX from Metafont (for which I currently see no acceptable remedy). J"org Knappen said: % However, METAFONT is only able to send specials to the gf file; it cannot % send them -- if I'm not mistaken -- to the tfm file nor are specials % propagated to the pk-file by gftopk (if the pk format is able to include % specials at all). According to the DVI Driver Standard, Level 0, Draft 0.5: 1. The PK design has eight commands, four being for {\bf special}. These must NOT occur _inside_ characters, but they can occur _between_ characters. 2. METAFONT generates two of these: the special command for a string of up to 2^8 = 256 bytes, and the one for a string of up to 2^24 = sixteen Megabytes. 3. TFM has no specials. 4. If header bytes 18 onwards are present, then information might be stored there, but I don't know whether the current TeX reads them.
Perhaps e-TeX would? (Nudge, nudge.) 5. TFM files are limited, by the `lf' integer, to (32K-1)*4 = 131068 bytes. 6. TFM's current design would allow up to 32767 font parameters (each of 4 bytes), were it not for the `lf' restriction, and certainly it allows many more than currently used by TeX. (I'm too lazy to calculate the exact limit, here and now.) 7. With the same proviso, TFM also allows up to 32767 characters, with any number of distinct heights, depths and italic corrections. J"org asked: % Can someone more experienced with the more esoteric aspects of % METAFONT comment the possiblity of creating [multi-]coloured % [glyphs] ? How one gets the _color information_ into the GF files, is an interesting question! I don't see how we can avoid changing METAFONT, that is, designing another program that writes TFM and GF files. On the other hand, GFtoPK needn't be any the wiser: it only needs to know how to pack pixels and to copy specials, and it already does those. But then perhaps our e-MF (as opposed to MF e :-) would write PK files directly? Geoffrey Tobin ======================================================================== Date: Fri, 5 Nov 93 09:37:27 +0100 Reply-To: "NTS-L Distribution list" From: Reino de Boer Subject: Re: Color Just came up with an idea. Maybe it's old. Maybe it's unworkable. I'm going to propose it anyway. Isn't it possible to use METAFONT's font_coding_scheme to indicate a color for the `on-pixels' ? Something like font_coding_scheme := "COLOR: RED" to indicate a red font to the dvidriver. This way the color can be handled in the same way as other font-attributes like series, size, etc. Maybe it's even possible to change ``only the font_coding_scheme'' in a virtual font. Thinking aloud -- Reino -- Reino R. A. de Boer CS Dept, Faculty of Economics, Erasmus University Rotterdam email: sysrb@cs.few.eur.nl There exists a way to compile TeX (Knuth 1985). ======================================================================== Date: Fri, 5 Nov 93 09:46:08 +0100 Reply-To: "NTS-L Distribution list" From: Piet van Oostrum Subject: Re: Color In-Reply-To: <199311050839.AA25687@infix.cs.ruu.nl> >>>>> Reino de Boer (RdB) writes: RdB> Just came up with an idea. Maybe it's old. Maybe it's unworkable. RdB> I'm going to propose it anyway. RdB> Isn't it possible to use METAFONT's RdB> font_coding_scheme RdB> to indicate a color for the `on-pixels' ? RdB> Something like RdB> font_coding_scheme := "COLOR: RED" RdB> to indicate a red font to the dvidriver. Which red? There are thousands of red colors. This means thousands of PK files. This would be a wasteful multiplication of information as the pixels themselves don't change with the color. So the color information has to be separated from the pixel information (except when we have multicolored characters). RdB> This way the color can be handled in the same way as other font-attributes RdB> like series, size, etc. RdB> Maybe it's even possible to change ``only the font_coding_scheme'' in a RdB> virtual font. Series is not a font-attrribute in the Metafont sense. It is only used on the abstract level of the font in the NFSS. It selects a completely different font on the Metafont level. Size looks more like it, but it leads naturally to a different PK file. Piet* van Oostrum, Dept of Computer Science, Utrecht University, Padualaan 14, P.O. Box 80.089, 3508 TB Utrecht, The Netherlands. 
Telephone: +31 30 531806 Uucp: uunet!ruuinf!piet Telefax: +31 30 513791 Internet: piet@cs.ruu.nl (*`Pete') X-400: G=Piet;S=van.Oostrum;OU=cs;O=ruu;PRMD=surf;ADMD=400net;C=nl; ======================================================================== Date: Fri, 5 Nov 93 10:07:35 +0100 Reply-To: "NTS-L Distribution list" From: Bernd Raichle Subject: Re: 16 bit eTeX In-Reply-To: KNAPPEN@VKPMZD.KPH.UNI-MAINZ.DE's message of Thu, 4 Nov 93 18:48:35+0100 <9311050225.AA10661@ifi.informatik.uni-stuttgart.de> Joerg Knappen wrote: [... idea of an 16-bit TeX ...] I think that there's more and other things to change first to get a "real" 16-bit TeX than the things Joerg proposes. I have tried to mention the problems in my mailing with the \textchar proposal: * The \lccode array is used (better: misused!!) for more than lowercasing characters. a) When lowercasing some tokens, TeX works on the character tokens in a _token list_ (the "argument" of \lowercase{...}). Because of this restriction, it's not possible to lowercase an \ae, because \ae is not a character token (it's a control sequence). b) When TeX needs to hyphenate a word, it works on the horizontal list of the current paragraph. This horizontal list consists of _character/font nodes_. and some other nodes. The characters of the word to be hyphenated are extracted from the character/font nodes using the \lccode values. => IMHO it's necessary to separate the extraction of characters from the char/font nodes from the \lccode array information (and use another mechanism) when we discuss extension in the hyphenation algorithm of TeX. * TeX's math mode and its text mode are different. In math mode it's not necessary to hyphenate words (because there are no words). Font "changes" inside a formula are not possible, each math atom refers to a font math family/group and only the fonts are used which are associated with a math family at the _end_ of the formula. With the \textchar porposal I wanted to show that the existing concepts for the text and the math mode are different. I'm not sure if we should use the math mode concept or a similar concept for the text mode. > \etextcode = :::: > where I propose for the meaning of the numbers the following: > directionality (none implied/ left-right/ right-left/ top-bottom/ > text uppercase code (see below) > text lowercase code > fontencoding (some kind of textfam) > location in font. My opinions: Put these informations in the font. * The should be specified in the font. * Use symbolic names to specify a position of a glyph in the font. * Uppercase/lowercase characters are font dependent (if we assume that different "languages" use different fonts). * The directionality is font dependent. !!! Distinguish between the font glyphs and the character codes !!! we use to input a text. The directionality of a glyph doesn't depend on the character(s) we use to transcribe it in the input stream. We can use the same character `a' to input an english text or an arabic text. The output (glyphs) and the directionality depend on the font. Bernd Raichle ======================================================================== Date: Fri, 5 Nov 93 10:18:05 +0100 Reply-To: "NTS-L Distribution list" From: Reino de Boer Subject: Re: Color > Date: Fri, 5 Nov 93 09:46:08 +0100 > From: Piet van Oostrum > RdB> Something like > RdB> font_coding_scheme := "COLOR: RED" > RdB> to indicate a red font to the dvidriver. > > Which red? There are thousands of red colors. This means thousands of PK > files. 
This would be a wasteful multiplication of information as the pixels > themselves don't change with the color. So the color information has to be > separated from the pixel information (except when we have multicolored > characters). RED was only an example. There needs to be a `standard' for the color indication, of course. > RdB> This way the color can be handled in the same way as other font-attributes > RdB> like series, size, etc. > RdB> Maybe it's even possible to change ``only the font_coding_scheme'' in a > RdB> virtual font. > Series is not a font-attribute in the Metafont sense. It is only used on > the abstract level of the font in the NFSS. It selects a completely > different font on the Metafont level. Size looks more like it, but it leads > naturally to a different PK file. I did mean series, size, etc. I'm aware that it would select a completely different font, although not necessarily on the METAFONT level (if it can be done with virtual fonts*). It wouldn't even have to use a different PK file, only a VF file with color information. If it can't be handled through virtual fonts, then we would need different fonts on the METAFONT level, and yes, we would need different PK files (which, admittedly, would duplicate most of their information). Anyway, I was thinking of files like cmbx10.vf for a bold computer modern having color <color>. Still dreaming of multicolored glyphs -- Reino * Strictly speaking virtual fonts could belong to the METAFONT level as well, but that's not the point. -- Reino R. A. de Boer CS Dept, Faculty of Economics, Erasmus University Rotterdam email: sysrb@cs.few.eur.nl There exists a way to compile TeX (Knuth 1985). ======================================================================== Date: Fri, 5 Nov 93 12:30:58 +0100 Reply-To: "NTS-L Distribution list" From: Piet van Oostrum Subject: Re: Color In-Reply-To: <199311050920.AA26770@infix.cs.ruu.nl> >>>>> Reino de Boer (RdB) writes: >> >> Which red? There are thousands of red colors. This means thousands of PK >> files. This would be a wasteful multiplication of information as the pixels >> themselves don't change with the color. So the color information has to be >> separated from the pixel information (except when we have multicolored >> characters). RdB> RED was only an example. There needs to be a `standard' for the color RdB> indication, of course. My point was not the encoding of the color but the number of colors. RdB> I did mean series, size, etc. I'm aware that it would select a completely RdB> different font, although not necessarily on the METAFONT level (if it can RdB> be done with virtual fonts*). It wouldn't even have to use a different RdB> PK file, only a VF file with color information. Still, with modern systems having more than 16 million colors, would you want to have 16 million VF files for each font? AND 16 million tfm files. RdB> Anyway, I was thinking of files like RdB> cmbx10.vf RdB> for a bold computer modern having color <color>. In an earlier post I proposed something like this (but then for the \font primitive), but let TeX strip the color information before going to the OS, but putting the full file name in the dvi file. In this way you would NOT need different VF files nor different tfm files. Of course the dvi driver would also have to interpret and strip the color information from the filename. Besides, it doesn't work for colored rules or areas.
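[Rules are the sticking point for any font-based scheme, since a rule has no font at all; with today's TeX one wraps the rule in driver specials instead. A sketch, again assuming a dvips-style colour stack and with an invented macro name:

  \def\colouredrule#1#2#3{% #1 = colour spec, #2 = height, #3 = width
    \special{color push #1}%
    \vrule height#2 depth0pt width#3\relax
    \special{color pop}}
  % e.g. \colouredrule{rgb 0 0 1}{4pt}{2cm} in horizontal mode

The same wrapping works for \hrule in vertical mode, but nothing of the colour survives inside TeX itself, which is Piet's point.]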
Piet van Oostrum ======================================================================== Date: Fri, 5 Nov 93 14:45:55 GMT Reply-To: "NTS-L Distribution list" Comments: Warning -- original Sender: tag was fx@DARESBURY.AC.UK From: Dave Love Subject: Re: Color In-Reply-To: <9311042104.AA10776@dlpx1> >>>>> On Thu, 4 Nov 93 11:45:15 +0100, Anselm Lingnau said: Anselm> There is obviously a difference in degree between being able Anselm> to use, say, two or three different colours in a document and Anselm> doing multicoloured maths or smoothly rainbow-coloured Anselm> titles. Indeed. Surely the original request wasn't to support artwork? I'd suggest typical uses such as: making slides, the bits-of-coloured-text style common in US textbooks or highlighting a hypertext button, but not the equivalent of a colour plate or general angry fruit salads. Anselm> anybody explain to me why it wouldn't be possible to put the Anselm> colour information into a virtual font and let the driver Anselm> worry about figuring out which colour to use for which glyph? You can. The problem is that virtual fonts don't have `begin font'/`end font' hooks. Thus you have to emit the `colour ' `colour default' information around each glyph, as far as I can see. You then take a big efficiency hit. It also means that you need one virtual font for each colour that you might use, although maybe you could have a few chameleon VFs with a parameterised colour which could each be initialised per document somehow with \specials. I'll be convinced you can get away with \specials alone iff someone clever supplies code those of us who've been there can't break (or Joachim supplies a rigourous proof :-)). My vote would be not to associate a colour property with individual fonts but have a `select colour' primitive analagous to `select font' with a new DVI instruction to affect fonts and rules, and `new colour' of course. Maybe revised VF technology would be sufficient, though, and how about making VFs mandatory for e-TeX drivers if there's still a need for such a thing with e-TeX? (`colour' might be generalised in the above.) ======================================================================== Date: Fri, 5 Nov 93 16:57:11 +0100 Reply-To: Mike Piff From: Mike Piff Subject: re: colour Are we going to ask for infra-red, ultra-violet and X-ray effects in eTeX too? Perhaps this rule could start in ultramarine and gradually shade into beta particle emissions ------------------------------------------------------------------------- and this one could be a hot shade of plutonium ========================================================================= The next one plays a tune when you look at it . . . -------- The mind boggles... Mike %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% Dr M J Piff %% e-mail: %% School of Mathematics and Statistics %% %% University of Sheffield %% M.Piff@sheffield.ac.uk %% Hicks Building %% %% SHEFFIELD S3 7RH %% Telephone: (0742) 824431 %% England %% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ======================================================================== Date: Mon, 8 Nov 93 16:13:17 +0100 Reply-To: "NTS-L Distribution list" From: Bernd Raichle Subject: Four Concrete Proposals... I will try to propose four concrete extensions for e-TeX in the next four mails. The proposals are ready to implement in the way that I can either sketch the necessary changes in TeX.web for these proposals or... 
the necessary changes are already running on my local TeX version. Therefore it's possible that some of these extensions can be found in e-TeX in a short time... Comments please!! Bernd Raichle ======================================================================== Date: Mon, 8 Nov 93 16:14:19 +0100 Reply-To: "NTS-L Distribution list" From: Bernd Raichle Subject: Proposal: \interactionmode register The current user |interaction| mode is not accessible on the TeX macro level. Proposal for e-TeX: Add a new count register \interaction, which reflects the user interaction level with the following values: <= 0 batch_mode = 1 nonstop_mode = 2 scroll_mode >= 3 error_stop_mode The commands \batchmode...\errorstopmode, the "options" 'Q', 'R', 'S' in the interactive error routine and all other TeX internal interaction level changes (e.g. after an interruption) access this new register. The level changes in the interactive error routine and the old commands should always work, even if the symbol \interaction is redefined (this means that the user can redefine \interaction, but the commands \batchmode...\errorstopmode still work). Examples: \ifnum\interaction<1 \AskUser \else \UseDefault \fi {\interaction=0 % switch to \batchmode \global\font\test=xyz10 % try to load font }% % restore former interaction level % ... now test if font has been loaded % without error (i.e. != nullfont) Status: I have made and implemented the necessary changes in my local TeX version. They have to be tested and checked for forgotten things. Is this change sufficient? Further extensions necessary? ... Bernd Raichle ======================================================================== Date: Mon, 8 Nov 93 16:15:12 +0100 Reply-To: "NTS-L Distribution list" From: Bernd Raichle Subject: Proposal: separate \catcode`^=7 and ^^x notation TeX uses the character ^ to output unprintable character (the character is hardwird in TeX.web). Characters with catcode 7 are used for two purposes when reading text: superscript in math mode and input of characters using the ^^x notation. (See also: Michael Downes NTS-L posting of Mon, 1 Nov 93 and Phil Taylor's article "The Future of TeX" in the EuroTeX '92 proceedings.) Proposal for e-TeX: Use two new internal count registers \inputescapechar and \outputescapechar to specify the "escape" character to be used for unprintable characters. If \inputescapechar is not in [0..255], TeX's behaviour is used, i.e., two equal characters with category 7 are used as a prefix for a ^^x notated character, otherwise two characters with code \inputescapechar are used for this prefix. If \outputescapechar is not in [0..255], the character `^' is used when an unprintable character has to be written. The default values of these two registers are \inputescapechar = -1 \outputescapechar = `^ to be compatible with TeX's standard behaviour. Problems: What's the behaviour of e-TeX when the \outputescapechar is unprintable for this TeX implementation (remember that it is only necessary that a subset of ASCII is printable; more in TeXbook, end of Appendix C), e.g., \outputescapechar=`^^M Do we really want to make this possible? How to prevent such situations (e.g. by restricting the values of \outputescapechar to a subset of ASCII, which is printable for all TeX implementations)? Relation between \newlinechar and \outputescapechar? IMO \outputescapechar (and all other characters in the ^^x notation for an unprintable character) should never result in a written newline, which is TeX's behaviour for versions >= 3.141. 
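To fix ideas, a minimal sketch of how the two proposed registers might be used (neither register exists in current TeX, and the characters chosen here are arbitrary):

   \inputescapechar=`\~    % two ~'s now introduce a ^^-style character, e.g. ~~41 for the character "41 (`A')
   \outputescapechar=`\!   % unprintable characters are written as !!x to the log and terminal

With the defaults (\inputescapechar=-1, \outputescapechar=`^) e-TeX would behave exactly like TeX.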
Status: The necessary changes are simple (for TeX version >= 3.141). I have made changes for \outputescapechar in my local TeX version to allow the specification of all printable characters in a "TeX code page" definition (the \outputescapechar register changes are the only thing needed to complete the change). The problems mentioned above have to be discussed before e-TeX can contain this extension. Comments?! Bernd Raichle PS: Some time ago, Karl Berry sent a mail with "TeX 4" extensions to a list of people, including the proposal of Paul Abrahams that says: ``make \message pay attention to \newlinechar [...]''. [Btw. the TeXbook doesn't mention the consequences of \newlinechar for \message or the log file output; there's only something said about \write.] DEK himself has included this "TeX 4" extension in TeX 3.141, because the former behaviour was implementation dependent. ======================================================================== Date: Mon, 8 Nov 93 16:15:39 +0100 Reply-To: "NTS-L Distribution list" From: Bernd Raichle Subject: Proposal: \mathspacing made available and changeable TeX--The Program, \S 764: ``The inter-element spacing in math formulas depends on an 8x8 table that TeX preloads as a 64-digit string. [...]'' The spacing cannot be changed without changing TeX.web itself. Proposal for e-TeX: Add a new count register array \mathspacing with 64 entries (we really need only 64-8=56 entries, because some of them are never used, but to simplify things 64 are used) with the following syntax:
   \mathspacing <atompair> = <spacing>   % 0 <= <atompair> <= 63
The spacing specified in number <spacing> is inserted between the two math atom types specified in number <atompair>. The two numbers are coded as
   <atompair> = <left_atom_type> * 8 + <right_atom_type>
   <spacing>  = ( ( <scriptscript_spacing> * 256 + <script_spacing> ) * 256 + <text_spacing> ) * 256 + <display_spacing>
This means that <atompair> is easily expressed in octal and <spacing> in hexadecimal notation. <..._atom_type> is one of the following seven types: 0 ordinary 1 large operator 2 binary operation 3 relation 4 opening 5 closing 6 punctuation 7 delimited subformula <..._spacing> can be specified separately for each of the four math styles (display, text, script and scriptscript) with the following values: 0 no space 1 thin space (specified by \thinmuskip) 2 medium space ( -- " -- \medmuskip) 3 thick space ( -- " -- \thickmuskip) 4-255 reserved for other things (e.g. other spacings and/or additional penalties, like \relpenalty, \binoppenalty, ...) For more information see TeXbook, pp. 170f & Appendix G, and TeX--The Program, \S 764ff. Examples (using TeX's standard spacing): Between an `ordinary' (= 0) and a `relation' (= 3) atom a thick space (= 3) is inserted, but not in script or scriptscript style. \mathspacing '03 = "0033 Between a large operator (= 1) and an ordinary (= 0) atom a thin space (= 1) is inserted: \mathspacing '10 = "1111 Status: Necessary changes for the sketched proposal are very simple to implement. The syntax of \mathspacing is awful, but this is true for \mathcode, \delcode, etc. etc., too. Can someone comment on the need for such a change? (I have not enough knowledge and experience with TeX's math mode, but I remember that someone has mentioned this in a TUGboat(??) article.) Is this change sufficient? Further extensions necessary? ... Bernd Raichle ======================================================================== Date: Mon, 8 Nov 93 16:17:16 +0100 Reply-To: "NTS-L Distribution list" From: Bernd Raichle Subject: Proposal: change the ligature builder/kerning routines in TeX The following is quoted from Jackowski and Ry\'cko, "Polishing TeX..."
in: EuroTeX '92 Proceedings of the 7th European Conference, pp. 119--134: ``3.1 Ligatures and kerns TeX inserts implicit kerns around ligatures in the following way: * the kern between a single character, say `v', and a ligature consisting of characters, say `xyz', is the same as the kern between `v' and `x' (the first character of the ligature); * the kern between a ligature `xyz' and a single character `w' is, in general, specific for this ligature and this character, and may be different from the kern between the single character `z' and the character `w'. This approach to inserting kerns aroung ligatures works well in the case of traditional ligatures, usually resembling the sequence of component characters. However, with the diacritical characters accessed via ligatures the matter is different: the ligature {\it is not\/} similar to the component characters. Assume, e.g., that Polish letters are coded as ligatures consisting of the slash `/' and a letter: /a, [...], /z. This would result in the same kerning before these characters as the kerning before the slash. [...]'' Proposal for e-TeX: Replace the (IMHO over-)optimized ligature builder/kerning routines in TeX by routines which separate between the building of ligatures and the insertion of kerns between the characters/ligatures. This can be realized in a simple way by using two passes: in the first pass the ligatures are built and in the second pass the resulting ligatures and remaining single characters are used to determine the necessary kerns. To ensure compatibility between e-TeX and TeX, it will be possible to switch between the new and the current behaviour. Additionally a flag in the TFM file of each font can be used to specify which behaviour is to be used for the font. This ensures that "old" fonts with some tricky ligature/kerning programs depending on the old behaviour can still be used with e-TeX. (I don't know if this font dependent switching is really necessary. Comments, please!!) Example: A font contains the following ligatures and kerns: o " => ligature (o") (= \"{o}) V o => kern(-smallkern) V lig(o") => nothing Input: V o " Output of current TeX: V kern(-smallkern) ligature(o") Output with change: V ligature(o") Status: I have written a simple, but running reimplementation of TeX's ligature/kerning routine (in CommonLisp), which still waits to be rewritten as a TeX.web change file. The ligature builder/kerning routine is realized in one pass; kerns are introduced in a delayed manner, i.e., after we are sure that there's no possibility for a ligature. Additionally there's a switch between the current TeX and the new behaviour. (The TRIP test fails with the new behaviour.) Comments?! Comments on the font dependent switch? Bernd Raichle PS: IMHO the ligature/kerning routines should be further changed to remove the `shelf{}ful' anomaly (see TeXbook, exercise 5.1), i.e., reinserting ligatures when words are hyphenated. The change should allow ligatures for inputs like `f{}f' or `f\relax f', which will simplify the macros in `german.sty', Babel and changed macros for \", \', ... which are used to select characters from DC fonts or other fonts with national characters. 
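To make the example concrete, the lig/kern program of such a font might look roughly like this in property-list (PL) form; the ligature slot (octal 366) and the kern amount are invented for illustration, only the structure matters:

   (LIGTABLE
      (LABEL C o)
      (COMMENT o followed by " gives the o-umlaut ligature, here assumed in slot octal 366)
      (LIG C " O 366)
      (STOP)
      (LABEL C V)
      (COMMENT V followed by o gets the small negative kern of the example)
      (KRN C o R -0.05)
      (STOP)
      )

With the current routines the V--o kern survives even though the `o' is then absorbed into the ligature; with the proposed two-pass scheme the kern would instead be looked up between `V' and the ligature glyph itself.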
======================================================================== Date: Mon, 8 Nov 93 16:41:05 +0100 Reply-To: "Philip Taylor (RHBNC)" From: P.TAYLOR@RHBNC.AC.UK Subject: Re: ?interactionmode Sounds good to me, but the title and proposed implementation are inconsistent: in the title, you refer to \interactionmode; in the message proper, \interaction. Which do you actually propose? Philip Taylor, RHBNC ======================================================================== Date: Mon, 8 Nov 93 16:56:06 +0100 Reply-To: "NTS-L Distribution list" From: Bernd Raichle Subject: Re: \interactionmode In-Reply-To: P.TAYLOR@RHBNC.AC.UK's message of Mon, 8 Nov 93 16:41:05 +0100 <9311081544.AA20327@ifi.informatik.uni-stuttgart.de> > Sounds good to me, but the title and proposed implementation are > inconsistent: in the title, you refer to \interactionmode; in the > message proper, \interaction. Which do you actually propose? \interactionmode Sorry for this and other inconsistencies, but I've written the four proposals yesterday late at night. And today I had not enough time to reread them carefully. Joerg Knappen proposed \currentinteractionmode (see NTS-L message of Mon, 30 Aug 93 and Joerg's NTS FAQ), but this name is too long and there is no other \...interactionmode than the current one. Bernd ======================================================================== Date: Mon, 8 Nov 93 18:59:05 GMT Reply-To: RHBNC Philip Taylor From: P.TAYLOR@RHBNC.AC.UK Subject: RE: Proposal: separate \catcode`^=7 and ^^x notation >>> Proposal for e-TeX: >>> Use two new internal count registers \inputescapechar and >>> \outputescapechar to specify the "escape" character to be used for >>> unprintable characters. Again generally in agreement, but I feel that the terminology is far too confusing; the term `escape character' in TeX is properly used only when applied to a character whose character code is 0; to usurp that terminology, albeit with pre-modifiers "input" and "output", seems to risk terrible confusion. What we need are descriptors which accurately define the functionality of the two characters; the first is used to introduce one of two distinct entities, and it may be that _two_ "inputescapecharacter"s are required, for the two functions are quite distinct: one introduces a single offset character (i.e. one whose character code is 64 displaced from its replacement), and the other introduces a pair of lower-case hex characters, whose combination represents the hex value of the replacement character. Thus I suggest that we need appropriate terms for the following: For the character which, when paired and followed by a single (usually printable) character, yields a character displaced by 64 from the single character which so follows; For the character which, when paired and followed by a pair of lower-case hexadecimal characters, yields a character whose character code is equal to the hexadecimal number so formed; And for the equivalent characters to be used for output (p.^370 suggests that both forms can occur in output). Thus we need four distinct control sequences, unless either the input and output characters are specified (in this proposal) to be identical, or unless the two prefix characters for each of the representations above is similarly specified to be identical. 
Given the reason for this RFC, it seems that we believe that DEK made a mistake in overloading catcode 7, and we should therefore avoid the same mistake in specifying the set of new "escape" characters; therefore I propose that we nominate all four characters through unique control sequences. So as to avoid overloading the semantics of "escape", I propose that we term them "prefix" characters, rather than "escape". And therefore I propose the following terminology as a first approximation: \displacementprefixinputcharacter \hexadecimalprefixinputcharacter \displacementprefixoutputcharacter \hexadecimalprefixoutputcharacter Now even I, who infinitely prefer lucidity to terseness, agree that these are too long for convenient use, and therefore propose the following consistently derived abbreviations (i.e. I do not propose that the four control sequences above be defined within e-TeX; they are simply used within this document to identify the four entities with which we are concerned; instead I propose the following as the four canonical forms): \offsetprefixinputchar \hexprefixinputchar \offsetprefixoutputchar \hexprefixoutputchar The reason for substituting `offset' for `displacement' is that contractions of the latter typically end in `p', which would result in two consecutive `p's in the short-form name; as the name is already a contraction, there would be a tendency to accidentally omit one of the two `p's, thereby yielding an unknown control sequence. Philip Taylor, RHBNC ======================================================================== Date: Mon, 8 Nov 93 23:15:05 CET Reply-To: "NTS-L Distribution list" From: Nicolas Subject: Re: Proposal: change the ligature builder/kerning routines in TeX In-Reply-To: Message of Mon, 8 Nov 93 16:17:16 +0100 from Too late for me: TeX is now able to handle 8-bit characters in input, so this trick is no longer of great interest. But to speak of TFMs, I have other propositions: 1. no reloading of a TFM at a different scale factor when one size is already loaded; 2. sharing of (identical) ligature programs. I don't think that would introduce too many incompatibilities, even when testing several small variations of a font, because in that case chances are that the scale factor would be identical. Nicolas Jungers anorsu at vm1.rice.ucl.ac.be ======================================================================== Date: Tue, 9 Nov 93 01:50:57 CET Reply-To: "NTS-L Distribution list" From: Nicolas Jungers Subject: Questions and thoughts I forgot to take care of the dynamic handling of \fontdimen in my TFM remarks. Now it's done. I suppose that, in this case, TeX has to duplicate the whole TFM. I have some (maybe stupid) questions and some (deep?) thoughts. 1. hyphenation - Is there a GOOD reason to use an _explicit_ kern with the accent primitive (except to inhibit hyphenation)? - In TeX you can't (usually) hyphenate the first word of a paragraph. Sounds like a good rule, but maybe it's just a feature? What is the rule, for example, in German, where the first word can be a 180-letter word? 2. scale factor There is the possibility of scaling a whole document; why not a single box (in this case all TFM transformations are linear)? 3. consistency I still find it sad that a hardware failure (math. copro.) didn't reveal itself in any log but only in the DVI. 4. color TeX isn't a graphics program. The only need _inside_ TeX is to give a _simple_ color attribute to _simple_ objects (rules and chars).
If you want complex colored objects (like a black circle), or complex variations of shades, use a graphics program or language. The true problem is on the driver side. The rules could be: - the driver handles color separation, trapping, etc. for simple objects. All the information can be handled by \special (we "just" need some standard), even trapping and the like. - TeX and the driver are in NO way qualified to handle complex color processing; separation must be handled by the graphics package (as usual in color processing). There is actually NO DTP program able to fully handle color (it's just too complicated). Even QuarkXPress, one of the best in this respect, fails miserably in complex cases. And all professional color processing is done with PostScript (at worst), thus all complex color processing is left to a PostScript device. Conclusion: - TeX doesn't have to handle color. - The driver must be able to provide simple color separation, trapping, ... Complex objects are given to the driver after processing, with minimal information: "this file is the cyan one, just put it on the cyan plate (and DON'T do any further processing on it)". 5. visual design This is a generalisation of a problem raised about color in TeX. Sometimes you need to give priority to the visual design. TeX is often unable to handle such situations. Think of the beginning of a chapter. You may want the first page to use a different \hsize and a different \baselineskip without any consideration of the structure of the text. It's a borderline case; you can handle this in TeX, but what about a different font/font size? Nicolas Jungers Anorsu at BUCLLN11.bitnet ======================================================================== Date: Tue, 9 Nov 93 09:08:40 +0100 Reply-To: "NTS-L Distribution list" From: Bernd Raichle Subject: Re: Questions and thoughts In-Reply-To: Nicolas Jungers's message of Tue, 9 Nov 93 01:50:57 CET <9311090055.AA25547@ifi.informatik.uni-stuttgart.de> Nicolas Jungers said on Tue, 9 Nov 93 01:50:57 CET: NJ> I have some (maybe stupid) questions and some (deep?) thoughts. NJ> 1. hyphenation NJ> - Is there a GOOD reason to use an _explicit_ kern with the accent primitive NJ> (except to inhibit hyphenation)? One reason I can think of is the following: An `\"{a}' (using the \accent primitive) produces the following sequence: kern `"' kern `a' If a font with this kerning is used, the input `"a' can produce: kern `"' kern `a' Now TeX has to distinguish between these two sequences when hyphenating words. Another reason lies in the statement that DEK often makes. He says that you have to use fonts with already accented characters to allow the hyphenation of words with these characters. NJ> - In TeX you can't (usually) hyphenate the first word of a paragraph. Sounds NJ> like a good rule, but maybe it's just a feature? What is the rule, for example, in NJ> German, where the first word can be a 180-letter word? You hyphenate these long words, and you will hyphenate words with an explicit hyphen if the parts are really long. The complete hyphenation algorithm is another (large!) section which has to be changed to really support hyphenation of exotic languages like German, Dutch, French, ... (with accented chars and exceptions like ck -> k-k). NJ> 3. consistency NJ> I still find it sad that a hardware failure (math. copro.) didn't reveal itself in NJ> any log but only in the DVI. Ok. e-TeX (and all other tools used with TeX) will have test routines included in their initialization routines.
These routines will include tests of all possible hardware components (including the prozessor!) and tests of all software components (operating system, dynamic libraries, hardware drivers, tools, etc.), which are used by this program (directly and indirectly). The only things you will need are: atleast 16MB (lower bound of estimation) of additional main memory or swap space and the same (or better double) amount of harddisk space for the included test code.... The test routines are ready to be released in june for all architectures. Bernd Raichle PS: Oh, I've forgotten to tell you the year in which the test routines will be released: It's the year &*%&^%#&($program aborted (nonsense test succeeded) ======================================================================== Date: Tue, 9 Nov 93 09:40:35 +0100 Reply-To: "NTS-L Distribution list" From: Bernd Raichle Subject: Re: Proposal: change the ligature builder/kerning routines in TeX In-Reply-To: Nicolas's message of Mon, 8 Nov 93 23:15:05 CET <9311082236.AA05109@ifi.informatik.uni-stuttgart.de> > To late for me, TeX is now able to handle 8 char in input. Thus this trick > is no more of a great interest. You need more than 256 different characters to typeset some languages. And sometimes you are unable to input some languages directly using your normally used keyboard. In these cases you have to (or better: you can) use a transliteration scheme to input the text (e.g. the transliteration scheme used for the cyrillic wncyr* fonts). Another example: Think about arabic texts where the typeset character glyph depends on the "context" of this character in the word: An isolated character, a character at the beginning/ending/in the middle of a word will produce a different glyph. Do you want to type four different keys to select the correct glyph for one character?? [There are different approaches: use macros (ArabTeX by Klaus Lagally), a separate preprocessor (ATeX by Terry Regier) and/or ligature programs (Yannis Haralambous arabic fonts).] And finally: You are already using a transliteration scheme for some characters in TeX! Do you really have a key for the characters `` '' -- --- '? '! on your keyboard? Did you use these keys with TeX 3.x, why not with TeX 2.x? Bernd Raichle ======================================================================== Date: Tue, 9 Nov 93 11:30:59 EST Reply-To: "NTS-L Distribution list" From: Werenfried Spit Subject: Re: Proposal: change the ligature builder/kerning routines in TeX In-Reply-To: Message of 09 Nov 1993 09:40:35 +0100 from On 09 Nov 1993 09:40:35 +0100 Bernd Raichle said: > >You are already using a transliteration scheme for some characters in >TeX! Do you really have a key for the characters `` '' -- --- '? '! >on your keyboard? Did you use these keys with TeX 3.x, why not with >TeX 2.x? In fact I have these keys. For '? and 'Y I always use them, for ``and '' sometimes. ------------------------------------------------ Werenfried Spit email: spit@vm.ci.uv.es tel: +34-6-386 4550 Dep. de Fisica Teorica, Universitat de Valencia ======================================================================== Date: Tue, 9 Nov 93 12:10:17 +0100 Reply-To: "NTS-L Distribution list" From: seroul@MATH.U-STRASBG.FR Subject: Re: Proposal: \interactionmode register >The current user |interaction| mode is not accessible on the TeX macro >level. 
> > >Proposal for e-TeX: > >Add a new count register \interaction, which reflects the user >interaction level with the following values: > <= 0 batch_mode > = 1 nonstop_mode > = 2 scroll_mode > >= 3 error_stop_mode > >Is this change sufficient? Further extensions necessary? ... I would like a total shut-up mode for inexperienced (and experienced) users, R. Seroul Laboratoire de Typographie Informatique Strasbourg ======================================================================== Date: Tue, 9 Nov 93 13:27:28 CET Reply-To: "NTS-L Distribution list" From: Nicolas Jungers Subject: Re: Questions and thoughts In-Reply-To: Message of Tue, 9 Nov 93 09:08:40 +0100 from NJ> 1. hyphenation NJ> - Is there a GOOD reason to use an _explicit_ kern with the accent primitive NJ> (except to inhibit hyphenation)? Bernd Raichle answers: BR> One reason I can think of is the following: BR> An `\"{a}' (using the \accent primitive) produces the following sequence: BR> kern `"' kern `a' BR> If a font with this kerning is used, the input `"a' can produce: BR> kern `"' kern `a' BR> Now TeX has to distinguish between these two sequences when BR> hyphenating words. OK, but: the first sequence is a valid character in a word, hence hyphenation must be allowed. Hence the priority is to allow hyphenation of accented letters, and only then to find a mechanism able to discriminate between patterns matching accented letters and patterns matching rows of characters. I don't think that can be easy, but accented letters are a widespread reality. BR> Another reason lies in the statement that DEK often makes. He says that BR> you have to use fonts with already accented characters to allow the BR> hyphenation of words with these characters. But TeX is still using 256-letter fonts, not enough for this scheme. Moreover, many PostScript fonts provide only a few already accented letters, but they do provide the accents. And yes, you can use VFs, but that's not the fastest way. NJ> 3. consistency NJ> I still find it sad that a hardware failure (math. copro.) didn't reveal itself in NJ> any log but only in the DVI. BR> Ok. e-TeX (and all other tools used with TeX) will have test routines BR> included in their initialization routines. These routines will BR> include tests of all possible hardware components (including the BR> prozessor!) and tests of all software components (operating system, BR> dynamic libraries, hardware drivers, tools, etc.), which are used by BR> this program (directly and indirectly). BR> The only things you will need are: atleast 16MB (lower bound of BR> estimation) of additional main memory or swap space and the same (or BR> better double) amount of harddisk space for the included test code.... BR> The test routines are ready to be released in june for all BR> architectures. That's definitely not what I want. My point is that TeX is internally using _two_ different ways to do the same thing. My wish is: "use the math copro or not, but do it consistently". Now, urban legend claims that "TeX doesn't use a math copro", and it's almost true. Imagine the difficulties of an ordinary user of TeX, with TeX itself claiming that everything is fine. Sure, there is no absolute workaround for this problem, but I still find my wish reasonable.
Nicolas Jungers Anorsu at BUCLLN11.bitnet ======================================================================== Date: Tue, 9 Nov 93 13:28:13 CET Reply-To: "NTS-L Distribution list" From: Nicolas Jungers Subject: Re: Proposal: change the ligature builder/kerning routines in TeX In-Reply-To: Message of Tue, 9 Nov 93 09:40:35 +0100 from > Too late for me: TeX is now able to handle 8-bit characters in input, so this trick > is no longer of great interest. BR> You need more than 256 different characters to typeset some languages. BR> And sometimes you are unable to input some languages directly using BR> your normally used keyboard. In these cases you have to (or better: BR> you can) use a transliteration scheme to input the text (e.g. the BR> transliteration scheme used for the cyrillic wncyr* fonts). BR> Another example: BR> Think about arabic texts where the typeset character glyph depends on BR> the "context" of this character in the word: An isolated character, a BR> character at the beginning/ending/in the middle of a word will produce BR> a different glyph. Do you want to type four different keys to select BR> the correct glyph for one character?? BR> [There are different approaches: use macros (ArabTeX by Klaus BR> Lagally), a separate preprocessor (ATeX by Terry Regier) and/or BR> ligature programs (Yannis Haralambous arabic fonts).] BR> And finally: BR> You are already using a transliteration scheme for some characters in BR> TeX! Do you really have a key for the characters `` '' -- --- '? '! BR> on your keyboard? Did you use these keys with TeX 3.x, why not with BR> TeX 2.x? Yes, even with TeX 2.x, and with an editor showing bold, italic, ... The point for me is that this can't be a priority without other substantial changes: 16-bit chars and the like, extended TFM properties, ... Nicolas Jungers Anorsu at BUCLLN11.bitnet ======================================================================== Date: Tue, 9 Nov 93 12:49:21 GMT Reply-To: "NTS-L Distribution list" From: Tim Bradshaw Subject: Re: Questions and thoughts In-Reply-To: <9311091231.aa09792@uk.ac.ed.castle> * Nicolas Jungers wrote: > That's definitely not what I want. My point is that TeX is internally using > _two_ different ways to do the same thing. My wish is: > "use the math copro or not, but do it consistently". Now, urban legend > claims that "TeX doesn't use a math copro", and it's almost true. > Imagine the difficulties of an ordinary user of TeX, with TeX itself > claiming that everything is fine. Sure, there is no absolute workaround > for this problem, but I still find my wish reasonable. If what you mean is `use or don't use native floating-point' then I think that's reasonable, and I think that probably the sensible answer is `don't use'. However any system written in a high-level language can never have any control over whether or not to use a maths coprocessor, even on systems where the term is meaningful. --tim ======================================================================== Date: Tue, 9 Nov 93 18:07:12 GMT Reply-To: RHBNC Philip Taylor From: P.Taylor@RHBNC.AC.UK Subject: RE: Proposal: \mathspacing made available and changeable >>> Proposal for e-TeX: >>> Add a new count register array \mathspacing with 64 entries (we really >>> need only 64-8=56 entries, because some of them are never used, but to >>> simplify things 64 are used) with the following syntax: >>> \mathspacing <atompair> = <spacing> % 0 <= <atompair> <= 63 >>> The spacing specified in number <spacing> is inserted between the two >>> math atom types specified in number <atompair>.
The two numbers are coded >>> as [...] Yes, I support this proposal, and I have needed it in the past (I cannot remember whether I discussed the need for it in a TUGboat article, but it is distinctly possible). The proposed syntax does not worry me unduly: we can always conceal it in a more elegant macro if a cleaner user interface is required. But there are some unposed (and therefore unanswered) questions which I think need addressing: 1) Is the register write-only or read/write; if read/write, are the elements accessed in a manner analogous to \fontdimen? 2) What is the scope of a change to (an element of) \mathspacing? If I change it within a nested group within maths mode, will the value at the end of the group be used throughout the group, or will the value obtaining at the maths-off determine the spacing throughout the maths list? Philip Taylor, RHBNC ======================================================================== Date: Tue, 9 Nov 93 19:17:24 CET Reply-To: "NTS-L Distribution list" From: bbeeton Subject: Re: Proposal: change the ligature builder/kerning routines in TeX In-Reply-To: <01H53IKVE6XUO2JE9P@MATH.AMS.ORG> i don't want to get into any "font wars" (i've been fighting them in a working group of an international standards organization committee since 1986; there's nastier infighting there than most tex folk can even imagine!), but ... tex has got input (coded characters) and output (glyphs) confused to some extent; at least the user view is usually a bit confused. i don't think any tex user would think of the ffi ligature as an input code/character. (bernd raichle's example of `` '' -- --- '? '! is good here, though he does call them "characters", which is natural, as that's the term that would have been used before either computers or computer standards people came along and supplied their own new, exclusive definitions.) the character complement of the modern russian alphabet is 33 letters. (there used to be more, and there are more, and/or different, in ukrainian, bulgarian, ...) the basic coded character set for most computers is based on the complement of the (american) english alphabet, and all (iso) standard extensions and commercial "code pages" are in certain ways compromises; so are tex fonts. the correspondence between what gets typed on a keyboard and the internal and external (output) representations may get closer, but it will never be one-to-one. anyhow, what caused me to respond to bernd's message is what he says about arabic: Think about arabic texts where the typeset character glyph depends on the "context" of this character in the word: An isolated character, a character at the beginning/ending/in the middle of a word will produce a different glyph. Do you want to type four different keys to select the correct glyph for one character?? i hope it's (almost) never necessary to think it might be necessary to type in the one-of-four selection of the proper arabic glyph. (the "almost" addresses the ability to generate things like example tables of glyphs out of context. and for really elegant composition, there are even more variations, based on the underlying calligraphic model of arabic script.) one really wants the input/internal representation to reflect things like collating sequence and be usable for operations like sorting and searching -- and identification of hyphenation points; glyph selection should ordinarily be "only" a post-processing composition function. 
the task that faces e-tex contributors, then, is to come up with effective ways of keeping these functions separate and implementing them efficiently and with minimum demands on the user without compromising the quality of the typeset output, regardless of the language being set. -- bb ======================================================================== Date: Tue, 9 Nov 93 22:03:19 GMT Reply-To: "NTS-L Distribution list" From: spqr@FTP.TEX.AC.UK Subject: suggestions for NTS-L i was thinking about the suggestions we have seen so far for NTS-L. They range from inchoate wishes about colour from to perfecftly-formed gems from Bernd, with impossibly long lists from Geoffrey in the middle (sorry, Geoffrey, not meant as a criticism). Can i ask the shadowy NTS team (by which mean, who is it apart from Phil?) to come back with some reactions on the *type* of requests coming in? The reason I bring this up is because Bernd's ideas are so precise, so reasonable, and yet (to me) so esoteric. I am a sure they are needed, but it may be years before I use them.... I wonder could NTS, when it starts, plan a series of sexy features, as well as these small wonders? To keep the punters interested? Sebastian PS yes i can type, and spell english, but i cannot control this slow modem line ======================================================================== Date: Wed, 10 Nov 93 04:57:10 CET Reply-To: "NTS-L Distribution list" From: Michael Downes Subject: RE: Proposal: separate \catcode`^=7 and ^^x notation In-Reply-To: <01H52Q9RL7SMO2JKZ3@MATH.AMS.ORG> Bernd Raichle proposed \inputescapechar and \outputescapechar. Phil Taylor suggested: \offsetprefixinputchar \hexprefixinputchar \offsetprefixoutputchar \hexprefixoutputchar allowing the possibility of handling three- and four-character sequences separately. Can we get opinions by more tex.web experts on possible implementation methods? Speed cost of various methods might also deserve consideration in deciding what would be a suitable interface, and whether the extra flexibility suggested by Phil is worth its cost (whatever that may be). (Maybe it could even increase file reading speed to separate the three- and four-character possibilities?) What about the possibility in current TeX to have two different catcode-7 characters at the same time, and construct special trios or quartets with either character? This would no longer be possible with either of the above proposals. Practically speaking, I don't see any serious drawback to losing this possibility, but it was, for example, one of the main points in Exercise 8.6 in the TeXbook. BTW, past precedent in TeX for the names consists of: \escapechar Character to use in printing control sequence names in screen, log, or other files \newlinechar Character to cause a new line in output to screen, log, or other files \endlinechar Character to add at the end of each line read from a file \hyphenchar Character to be inserted during hyphenation \skewchar Character to use in calculating math accent skews (cf. also \defaulthyphenchar and \defaultskewchar). Special three-character sequences are referred to as `trigraphs' in Kernighan & Ritchie's book `The C Programming Language'. That could extend to (take your pick) tetragraph, fourgraph, or quadrigraph for four characters. 
If best implementation were via extended catcode range then the analogy to \escapechar would be closer and we would only need one or two new primitive names to control output of the characters (take your pick) \offsetprefixchar, \hexprefixchar \trigraphchar, \tetragraphchar ... If I were writing TeX over from scratch I would probably only use four-character hexadecimal sequences, for simplicity, and not allow the three-character sequences. But backward compatibility issues make that impossible for e-TeX. Michael Downes mjd@math.ams.org (Internet) ======================================================================== Date: Wed, 10 Nov 93 15:04:07 +1100 Reply-To: "NTS-L Distribution list" From: ecsgrt@LUXOR.LATROBE.EDU.AU Subject: Re: Proposal: \interactionmode register I second R. Seroul's suggestion: % I would like a total shut-up mode for inexperienced (and experienced) % users, `Software Tools in Pascal, incorporating TeX'. :-) That makes at least three in favor of a silent mode for TeX. Geoffrey Tobin ======================================================================== Date: Wed, 10 Nov 93 11:31:53 CET Reply-To: "NTS-L Distribution list" From: Joachim Schrod Subject: Re: Proposal: \interactionmode register In-Reply-To: <199311100407.AA45635@rs3.hrz.th-darmstadt.de> from"ecsgrt@LUXOR.LATROBE.EDU.AU" at Nov 10, 93 03:04:07 pm You wrote: > > I second R. Seroul's suggestion: > > % I would like a total shut-up mode for inexperienced (and experienced) > % users, What is the difference between the ``total shut-up mode'' and batchmode? Joachim PS: I always shudder when I see papers in conference proceedings where authors didn't care for (at least 40pt) overfull hboxes. They've written weeks on their paper and now they don't have the five minutes to insert a few hyphenation points... Something which is IMO encouraged by \batchmode or TeX UIs like AUC-TeX. -- =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Joachim Schrod Email: schrod@iti.informatik.th-darmstadt.de Computer Science Department Technical University of Darmstadt, Germany ======================================================================== Date: Fri, 12 Nov 93 13:33:59 +0100 Reply-To: "Philip Taylor (RHBNC)" From: P.Taylor@RHBNC.AC.UK Subject: Report on the inaugural meeting of the NTS group, September 1993 Report on the Inaugural Meeting of the NTS Core Group: September, 1993. ========================================================================= This is a report on the inaugural meeting of the NTS (`New Typesetting System') project group, held during the autumn DANTE meeting at Kaiserslautern (Germany) on 23rd and 25th September, 1993. Present: Joachim Lammarsch (DANTE President, and instigator of the NTS project); Philip Taylor (Technical co-ordinator, NTS project); Marion Neubauer (minutes secretary); Prof. Dr. Peter Breitenlohner, Mariusz Olko, Bernd Raichle, Joachim Schrod, Friedhelm Sowa. Background: Although the NTS project has been in existence for approximately eighteen months, there has not previously been a face-to-face meeting of members of the core group; at the Spring meeting of DANTE Rainer Sch\"opf announced his resignation as technical co-ordinator, and Philip Taylor was invited by Rainer and Joachim to take over as co-ordinator, which he agreed to do. 
Joachim Lammarsch opened the Autumn meeting by reviewing the history of the project and the rationale which lay behind its creation; each member of the group then briefly reviewed his or her particular area of interest in the project, after which the group received an extended presentation from Joachim Schrod on one possible approach to the realisation of NTS. The members of the group were broadly in support of the approach outlined by Joachim Schrod, and it was \stress {agreed} that this should form the basis for discussions at the meeting. The approach proposed by Joachim may be summarised as follows: {\TeX} in its present form is not amenable to modification; the code, although highly structured in some ways, is also painfully monolithic in others, and any attempt to modify the present code in anything other than trivial ways is almost certainly doomed to failure. Accordingly, before attempting to modify {\TeX} in any way it is first necessary to re-implement it, the idea behind such re-implementation being to eliminate the interdependencies of the present version and to replace these with a truly modular structure, allowing various elements of the typesetting process to be easily modified or replaced. This re-implementation should be undertaken in a language suitable for rapid prototyping, such as the Common Lisp Object System (`CLOS'). The primary reason for the re-implementation is to provide modularisation with specified internal interfaces and therby provide a test bed, firstly to ensure that {\TeX} has been properly re-implemented and subsequently to allow the investigation of new typesetting paradigms. Once a working test bed has been created, and compatibility with existing {\TeX} demonstrated, a second re-implementation will be undertaken; this re-implementation will have the same modular structure as the test bed but will be implemented with efficiency rather than extensibility in mind, and will be undertaken using a combination of literate programming and a widespread language with a more traditional approach, such as `C++'. When this second version has also been demonstrated to be compatible with {\TeX}, it will be made available to implementors around the world, the idea being to encourage people to migrate to NTS by demonstrating its complete compatibility with {\TeX} (the test bed will also be made available if there is interest shewn in its use). Thereafter new ideas and proposals will be investigated using the test bed, and if found to be successful these will be re-implemented in the distribution version. The main problem which the group identified with the approach outlined by Joachim was simply one of resources: in order to accomplish two re-implementations within a reasonable time-scale, it would be essential to use paid labour, it being estimated that each re-implementation requires a minimum of four man-months work to produce a prototype, and eight man-months to reach the production stage. As this is far beyond the ability of members of the group to contribute in the short term, it is clearly necessary to employ a small team (of between two and four members) to carry out the re-implementations under the guidance and supervision of one or more members of the core group. Initial costings suggested that this could not be accomplished within the present financial resources of the group, and accordingly it was \stress {agreed} that Joachim Lammarsch should seek further financial support. 
Subsequent investigations shewed that a quite significant reduction in costs could be achieved if the programming team were sited in a central or eastern European country, particularly if the members of the team were also residents of the country; this approach is being investigated. As it was obvious that no immediate progress could be made with Joachim Schrod's proposal, even though the group agreed that it represented an excellent philosophical approach, it was also \stress {agreed} that the group needed to identify some fallback approaches, which could (a)~be commenced immediately, and (b)~would be of significant benefit to the {\TeX} community at large. The group identified two such projects, these being (1)~the specification of a canonical {\TeX} kit, and (2)~the implementation of an extended {\TeX} (to be known as e-{\TeX}) based on the present WEB implementation. It was also \stress {agreed} that Marek Ry\'cko \& Bogus{\l}aw Jackowski would be asked if they were willing to co-ordinate the first of these activities, and that Peter Breitenlohner would co-ordinate the second. The ideas behind the two proposals are as follows. (1)~The canonical {\TeX} kit: at the moment, the most that can be assumed of any site offering {\TeX} is (a)~ini{\TeX}; (b)~plain {\TeX}; (c)~{\LaTeX}; and (d)~at least sixteen Computer Modern fonts. Whilst these are adequate for a restricted range of purposes, it is highly desirable when transferring documents from another site to be able to assume the existence of a far wider range of utilities. For example, it may be necessary to rely on BibTeX, or on MakeIndex; it may be useful to be able to assume the existence of BM2FONT; and so on. Rather than simply say ``all of these can be found on the nearest CTAN archive'', it would be better if all implementations contained a standard subset of the available tools. It is therefore the aim of this project to identify what the elements of this subset should be, and then to liaise with developers and implementors to ensure that this subset is available for, and distributed with, each {\TeX} implementation. (2)~Extended {\TeX} (e-{\TeX}): whilst the test bed and production system approach is philosophically very sound, the reality at the moment is that the group lacks the resources to bring it to fruition. None the less, there are many areas in which a large group of existing {\TeX} users believe that improvements could be made within the philosophical constraints of the existing {\TeX} implementation. E-{\TeX} is an attempt to satisfy their needs which could be accomplished without a major investment of resources, and which can pursued without the need for additional paid labour. Finally the group agreed to individually undertake particular responsibilities; these are to be: Peter Breitenlohner: Remove any existing incompatibilities between {\TeX}--{\XeT} and {\TeX}, with the idea of basing further e-{\TeX} developments on {\TeX}--{\XeT}; liaise with Chris Thompson concerning portability of the code; produce a catalogue of proposed extensions to e-{\TeX}. Joachim Lammarsch: liaise with vendors and publishers in an attempt to raise money for the implementation of NTS proper; arrange a further meeting of interested parties; liaise with Eberhard Mattes concerning the present constraints on the unbundling of em{\TeX}; negotiate with leading academics concerning possible academic involvement in the project. 
Mariusz Olko: take responsibility for the multi-lingual aspects of e-{\TeX} and NTS; discuss the possibility of siting the NTS programming team in Poland; discuss the possibility of academic involvement with leading Polish academics. Bernd Raichle: endeavour to get {\TeX}--{\XeT} integrated into the standard UNIX distribution; prepare a list of proposed extensions to e-{\TeX}; lead discussions on NTS-L. Friedhelm Sowa: primary responsibility for finance; prepare proposals for a unified user interface and for unification of the integration of graphics; liaise with the Czech/Slovak groups concerning possible siting of the NTS programming team in the Czech Republic or Slovakia; discuss possible academic involvement with leading academics. Philip Taylor: Overall technical responsibility for all aspects of the project; liaise with other potential NTS core group members; prepare and circulate a summary of the decisions of this and future meetings. Philip Taylor, 09-NOV-1993 14:02:03 ======================================================================== Date: Mon, 15 Nov 93 13:30:44 +0100 Reply-To: "NTS-L Distribution list" From: Bernd Raichle Subject: Re: suggestions for NTS-L In-Reply-To: spqr@FTP.TEX.AC.UK's message of Tue, 9 Nov 93 22:03:19 GMT <9311092201.AA27351@ifi.informatik.uni-stuttgart.de> On Tue, 9 Nov 93 22:03:19 GMT, spqr@FTP.TEX.AC.UK said: spqr> [...] Can i ask the shadowy NTS team (by which mean, who spqr> is it apart from Phil?) to come back with some reactions on the *type* of spqr> requests coming in? The following opinions are mine, which can be different from the opinions of the rest of the "shadowy NTS team". My try to classify the type of extension suggestions w.r.t. their e-TeX usability: a) ideas b) ideas, usable and useful for typesetting & an input language c) suggestions out of the scope of e-TeX (because they will mean a total reimplementation of TeX.web, which is subject of NTS) d) suggestions for e-TeX e) concrete descriptions of TeX extensions for e-TeX f) complete changefiles for TeX.web (preferred ;-) All of these suggestions are useful, because they show specific deficits of TeX, but suggestions which are really needed are of type d), e), or f). Suggestions of type b) and c) are useful, too, because they can be used to choose and select between the bunch of d)-e). Until now most of the suggestions for e-TeX were of type a)-d), because the difference between NTS and e-TeX was unclear (or has been ignored!). The "perfectly-formed gems" I have posted are the first try to focus the discussion to these types of suggestions. What's needed for the start of e-TeX are descriptions of TeX extensions, which are 1) useful, 2) needed, 3) can be implemented in short time, 4) within the framework of TeX.web (with all its restrictions and flaws), ... The best suggestions are descriptions how to do something! ...not which things must be changed without the description of what's the needed behaviour. And it's also not useful to say "unlimit all limits", because all things are limited ...and for e-TeX we have to live with TeX (written in Pascal-WEB) and all the limitations of this implementation language. spqr> The reason I bring this up is because Bernd's ideas are so precise, so spqr> reasonable, and yet (to me) so esoteric. I am a sure they are needed, but spqr> it may be years before I use them.... 
It's not necessary that you or any normal TeX user will use them, but if extensions can be used to simplify things (or make things possible) for macro writers, the normal user has advantages, too. spqr> I wonder could NTS, when it starts, spqr> plan a series of sexy features, as well as these small wonders? To keep spqr> the punters interested? Which "sexy features"?? (any pictures of them?? gif or jpeg preferred! ;-) Describe them, please. -Bernd Raichle ======================================================================== Date: Mon, 15 Nov 93 17:04:31 +0100 Reply-To: "NTS-L Distribution list" From: Bernd Raichle Subject: Re: Questions and thoughts In-Reply-To: Nicolas Jungers's message of Tue, 9 Nov 93 13:27:28 CET <9311091231.AA28660@ifi.informatik.uni-stuttgart.de> Nicolas Jungers said on Tue, 9 Nov 93 13:27:28 CET: NJ> - Is there a GOOD reason to use _explicit_ kern with the accent primitive NJ> (except to inhibate hyphenation)? [..] BR> An `\"{a}' (using the \accent primitive) produces the following sequence: BR> kern `"' kern `a' BR> If a font with these kerning is used, the input `"a' can produce: BR> kern `"' kern `a' BR> Now TeX has to distinguish between these two sequences when BR> hyphenating words. NJ> OK, but: NJ> The first sequence is a valid characters in a word, hence hyphenation must NJ> be allowed. Hence the priority is to allow hyphenation of accented letters, NJ> and after, find a mechanism able to discriminate patterns matching accented NJ> letters and patterns matching row of chararcters. I don't think that can be NJ> easy, but accented letters are a widespread reality. The first sequence (i.e., \"{a}}) is _not_ a valid character in a word. It specifies a simple overprinting of glyphs, not a single character of a word. A human reader will then recognize the overprint of the two glyphs as one, but not TeX. IMHO what's really needed is something like a (device independent) virtual font mechanism in TeX. The input of \"a should be mapped to a description in an intermediate format, e.g. `Aumlaut' in current active text font. This intermediate structures are then mapped to the really used fonts and if the glyph doesn't exist, it can be possible to specify the construction of this glyph out of other glyphs (as is done using a virtual font now) using font dependent parameters (e.g. for special placement of accents like the ogonek). In the example an `Aumlaut' glyph can be constructed using the `umlaut accent' glyph and the `A' glyph. Michael Ferguson's ML-TeX implements this partially with the only ability to construct a glyph out of two other glyphs using TeX's \accent primitive internally. In a short summary: It's the wrong way to construct a sequence of overprinted glyphs and then try to "discriminate patterns matching accented letters" from the constructed sequence. It will be better to specify one item in an internal objects as an accented letter and if it will be necessary (e.g. no such glyph in the font) use a construction of glyphs to typeset this accented letter. Bernd Raichle ======================================================================== Date: Mon, 15 Nov 93 17:14:10 +0100 Reply-To: "NTS-L Distribution list" From: Bernd Raichle Subject: Re; Questions and thoughts In-Reply-To: Tim Bradshaw's message of Tue, 9 Nov 93 12:49:21 GMT <9311091251.AA01880@ifi.informatik.uni-stuttgart.de> Tim Bradshaw said on Tue, 9 Nov 93 12:49:21 GMT: TB> * Nicolas Jungers wrote: > That's definitively not what i want. 
I want to point out that TeX is using > internally _two_ different ways to make the same thing. My wish is: > "use or not the math copro, but do it consistently". Now, urban legend > claims that "TeX don't use a math copro", and it's almost true. > Imagine the difficulties of a lambda user of TeX, with TeX itself > claiming that everything is fine. Sure there is no absolute workaround > for this problem, but I still find my wish reasonable. TB> If what you mean is `use or don't use native floating-point' then I TB> think that's reasonable, and I think that probably the sensible answer TB> is `don't use'. However any system written in a high-level language TB> can never have any control over whether or not to use a maths TB> coprocessor, even on systems where the term is meaningful. This is another (minor) point in the e-TeX extension list: use fixed point arithmetic only instead of the floating point routines in some places (e.g. accent placement, glue setting). Has someone already extended Knuth's program "Fixed-Point Glue Setting" [TUGboat 3,1 (March 1982), 10--27] in such a way that it can be used in TeX? Volunteers!?!? Bernd Raichle ======================================================================== Date: Mon, 15 Nov 93 17:16:12 CET Reply-To: "NTS-L Distribution list" From: Nicolas Jungers Subject: double use of "linepenalty I think that's a relatively easy problem. In one case, I have needed to very carefully tune the different parameters for line breaking and page breaking. At that time I encountered the problem that \linepenalty was used in both algorithms. My suggestion: separate the two uses in e-TeX. I think that's a minor point of the whole problem of line/page breaking flaws. But it seems so easy. Nicolas Jungers anorsu at vm1.rice.ucl.ac.be ======================================================================== Date: Mon, 15 Nov 93 17:39:17 +0100 Reply-To: "NTS-L Distribution list" From: Bernd Raichle Subject: Re: Proposal: \mathspacing made available and changeable In-Reply-To: P.Taylor@RHBNC.AC.UK's message of Tue, 9 Nov 93 18:07:12 GMT <9311091818.AA21931@ifi.informatik.uni-stuttgart.de> P.Taylor@RHBNC.AC.UK said on Tue, 9 Nov 93 18:07:12 GMT: >>> Add a new count register array \mathspacing with 64 entries (we really >>> need only 64-8=56 entries, because some of them are never used, but to >>> simplify things 64 are used) with the following syntax: >>> \mathspacing<number> = <number> % 0 <= <number> <= 63 >>> The spacing specified in the second <number> is inserted between the two >>> math atom types specified in the first <number>. The two numbers are coded >>> as [...] PT> But there are some unposed (and therefore unanswered) PT> questions which I think need addressing: PT> 1) Is the register write-only or read/write; if read/write, are the PT> elements accessed in a manner analogous to \fontdimen? The spacing is not font dependent (or is it?), therefore it's unnecessary to put the registers into the |font_info| area. I have thought of them as a set of "normal" count (= integer) registers, which can be read (e.g. \count255=\mathspacing0 ) and set to specific values; the values are restored after a group, and assignments to these registers are local unless you prefix them with \global. PT> 2) What is the scope of a change to (an element of) \mathspacing? PT> If I change it within a nested group within maths mode, will PT> the value at the end of the group be used throughout the group, PT> or will the value obtaining at the maths-off determine the spacing PT> throughout the maths list?
TeX's math mode is a very special mode, because almost all math parameters can be changed throughout a math formula with no effect and only the values at the maths-off determine the result. Reason: a math formula is typeset in more than one pass. In the first pass the complete math formula is read and saved in an `mlist' without any spacing between the formula parts. Now when we are at the math-off and the complete formula is read, the spacing is determined between the (at this moment) well-determined types of the formula parts. And at this moment (remember: we are at a math-off) TeX reads the necessary math parameters. It's another item on the list of extensions to extend this approach and to add the possibility to change parameters in a nested group, but I don't want to mix two different things in my original proposal. Answer to Phil's question: the value obtaining at the maths-off will determine the spacing throughout the maths list (the pragmatic approach). Remember: TeX uses only five different spaces between math nodes: no space, conditional thin space (no space in script/scriptscript style), thin space, conditional medium space (no space in script/scriptscript style), and conditional thick space (no space in script/scriptscript style). With the sketched \mathspacing extension it is possible to specify different spaces for all four math styles, e.g. insert no space in scriptscript, a thin space in script, and a medium space in display and text style. Bernd Raichle ======================================================================== Date: Mon, 15 Nov 93 17:48:35 +0100 Reply-To: "NTS-L Distribution list" From: Bernd Raichle Subject: Re: double use of "linepenalty In-Reply-To: Nicolas Jungers's message of Mon, 15 Nov 93 17:16:12 CET <9311151625.AA20395@ifi.informatik.uni-stuttgart.de> Nicolas Jungers said on Mon, 15 Nov 93 17:16:12 CET: NJ> I think that's a relatively easy problem. NJ> In one case, I have needed to very carefully tune the different parameters NJ> for line breaking and page breaking. At that time I encountered the problem NJ> that \linepenalty was used in both algorithms. NJ> My suggestion: separate the two uses in e-TeX. I think that's a minor point NJ> of the whole problem of line/page breaking flaws. But it seems so easy. ??? \linepenalty is only used to compute the demerits of the box containing the line, i.e., used only for line breaking. The parameter for page breaking is \interlinepenalty and it is only inserted between vertically stacked material. ==> separated in e-TeX version 0.0, available as TeX version 3.x ;-) Bernd Raichle ======================================================================== Date: Mon, 15 Nov 93 11:53:00 CST Reply-To: "NTS-L Distribution list" From: "FREDDIE W. NIX JR." Subject: Re: suggestions for NTS-L Hey! COULD SOMEONE PLEASE HELP ME GET OFF THIS LISTSERV...i HAVE NO IDEA HOW TO DO IT!!!! hELP! ======================================================================== Date: Mon, 15 Nov 93 20:15:00 +0200 Reply-To: "NTS-L Distribution list" From: KNAPPEN@VKPMZD.KPH.UNI-MAINZ.DE Subject: \textcode again To: nts-l <@mvs.gsi.de:nts-l@dhdurz1.bitnet> Unfortunately the mail here was somewhat unreliable during the last week, therefore some of the following comments may come late.
I wrote: > \etextcode<number> = <directionality>:<uppercase>:<lowercase>:<fontencoding>:<location> > where I propose for the meaning of the numbers the following: > directionality (none implied/ left-right/ right-left/ top-bottom/ ...) > text uppercase code (see below) > text lowercase code > fontencoding (some kind of textfam) > location in font. Bernd Raichle answered: :My opinions: : :Put this information in the font. : * The should be specified in the font. : * Use symbolic names to specify a position of a glyph : in the font. : * Uppercase/lowercase characters are font dependent : (if we assume that different "languages" use different fonts). : * The directionality is font dependent. : Here I disagree with the general strategy, although we might meet well in working out some scheme. I consider putting intelligence `as an exercise' into the fonts the wrong approach. This approach leaves us with very few math fonts for TeX, while there are thousands of rather `stupid' text fonts around. Of course, to get the output glyphs right, TeX has to know, or to make some educated guess about, how the fonts are organised and what they include. If this guess fails (a good example is \cal in the OFSS) weird output occurs. I think it is easier to write some (e)TeX macros to use a given font than to customise a given font to the needs of (e)TeX -- if this is possible at all; think of built-in fonts of non-postscript printing devices. An (e)TeX macro package might exploit the fontencoding information provided by the tfm file and apply the necessary switches. I do not understand how a reference by a symbolic name should work out. TeX has its own symbolic names like \ss, \l, \oe, \^o, and they are worked out behind the scenes to some glyph encodings. There is some control over the \uppercasing and \lowercasing of those characters. :!!! Distinguish between the font glyphs and the character codes :!!! we use to input a text. Yes. The question is where to interface. Shall the fonts know about some names provided in etfm-files or vf-files as specials, and TeX looks up these names, or shall there be a part of TeX which maps the symbolic TeX names to fontencoding-dependent numbers? I go for the latter. :The directionality of a glyph doesn't depend on the character(s) we :use to transcribe it in the input stream. We can use the same :character `a' to input an english text or an arabic text. :The output (glyphs) and the directionality depend on the font. Yes they do. But again, who should know about this, the font or an (e)TeX macro package? And there are special cases, like Japanese, where the directionality may be a question of context and/or document design. A short Japanese insert in a European text will be left-to-right, but a standalone Japanese text probably top-to-bottom with right-to-left columns. --J"org Knappen. ======================================================================== Date: Mon, 15 Nov 93 20:29:00 +0200 Reply-To: "NTS-L Distribution list" From: KNAPPEN@VKPMZD.KPH.UNI-MAINZ.DE Subject: \afterinput Here is another primitive I'm thinking about: \afterinput which inserts the following command after the next \input is executed. To print a file verbatim, the option could then be as easy as \afterinput\beginverbatim \input file-to-be-printed-verbatim \endverbatim Maybe it could also be utilised to devise some macros cutting junk from the beginning of a file (like mail headers). But I don't know about the implications of this one. --J"org Knappen.
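For the verbatim use case just sketched, something close is already possible with existing primitives; the following is a sketch along the lines of the verbatim-listing example in the TeXbook's Appendix D (assuming plain TeX, where \dospecials is defined; the wrapper name \verbinput is made up here):

   \def\uncatcodespecials{\def\do##1{\catcode`##1=12 }\dospecials}
   \def\setupverbatim{\tt \def\par{\leavevmode\endgraf}%
     \obeylines \uncatcodespecials \obeyspaces}
   {\obeyspaces\global\let =\ } % active space prints as a control space
   \def\verbinput#1{\par\begingroup\setupverbatim\input#1 \endgroup}

The difference is mainly one of packaging: \afterinput would attach the hook to the primitive \input itself, instead of requiring the file name to be passed to a wrapper macro.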
======================================================================== Date: Mon, 15 Nov 93 20:38:48 CET Reply-To: "NTS-L Distribution list" From: Joachim Schrod Subject: Re: \afterinput In-Reply-To: <199311151935.AA44658@rs3.hrz.th-darmstadt.de> from "KNAPPEN@VKPMZD.KPH.UNI-MAINZ.DE" at Nov 15, 93 08:29:00 pm You wrote: > > Here is another primitive I'm thinking about: > > \afterinput > which inserts the following command after the next \input is executed. To > print a file verbatim, the option could then be as easy as > > \afterinput\beginverbatim > \input file-to-be-printed-verbatim > \endverbatim That's implementable in good ol' TeX. Joachim -- =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Joachim Schrod Email: schrod@iti.informatik.th-darmstadt.de Computer Science Department Technical University of Darmstadt, Germany ======================================================================== Date: Mon, 15 Nov 93 18:56:00 +0200 Reply-To: "NTS-L Distribution list" From: KNAPPEN@VKPMZD.KPH.UNI-MAINZ.DE Subject: Re: \mathspacing I want to strongly second Bernd's proposal of \mathspacing. Although the desired effect (a tighter or looser setting of math) could be achieved by creating highly specialised fonts, even more specialised than TeX fonts usually are, it is the right way to attack this problem. And it will be appreciated by the publishing industry, I think. The parameters are accessible in TeX82 in a very indirect way; they are determined by the additional fontmetrics of \textfont2 \textfont3 and their \script- and \scriptscript-companions. This intricate dependency is one of the great barriers in designing new math fonts -- you cannot just provide the glyphs and then a TeX macro package, but you have to tune those tricky parameters. The proposed syntax of \mathspacing is not very easy, but maybe it should not be made too easy to produce bad typesetting. But I would rather have full control over the spacing, i.e. the freedom to specify it in units of my choice instead of resorting to a few predefined values. --J"org Knappen ======================================================================== Date: Mon, 15 Nov 93 20:19:00 +0200 Reply-To: "NTS-L Distribution list" From: KNAPPEN@VKPMZD.KPH.UNI-MAINZ.DE Subject: Weird idea ? This may be a rather weird proposal... \goto{Harmfull} and \gotolabel{Harmfull} Why not? If the \gotolabel is already defined, \goto goes back to the last occurrence of it; if not, it peeks ahead until it finds the appropriate label. This allows: * More transparent \loop structures * Sections of comments to be skipped easily and fast, without caring for balanced braces and/or \if...\else...\fi s. \goto could be expandable as \if \fi, and handle anything inside in the same way. --J"org Knappen ======================================================================== Date: Mon, 15 Nov 93 20:38:00 +0200 Reply-To: "NTS-L Distribution list" From: KNAPPEN@VKPMZD.KPH.UNI-MAINZ.DE Subject: Some special characters Here are some mixed minor thoughts. Make all hard-wired meanings of some input characters freely definable and switchable. At the moment, the following come to my mind: \item[`] \alphabeticchar (starts a character constant, like in \catcode`\@\active) \item['] \octalchar (starts an octal number, like '777) \item["] \hexchar (starts those notorious hex numbers, like "F8) \item[.] \decimalseparator (in Europe the comma is used as a decimal separator) \item[-] ...
which is in fact an ever-breakable hyphen, has lots of strange r\^oles in the hyphenation process, most of which are not handled by the \hyphenchar. It is the only character that may indicate break points in the \hyphenation{com-mand}. Should they be handled like \hyphenchar (which means that there can always be only one representative for each function, but the same character can have more than one function) or as new \catcode choices (which means that they cannot share functions, but there can be more than one \decimalseparator at a time)? --J"org Knappen. ======================================================================== Date: Tue, 16 Nov 93 14:13:24 +1100 Reply-To: "NTS-L Distribution list" From: ecsgrt@LUXOR.LATROBE.EDU.AU Subject: Re: Questions and thoughts Regarding composite characters: Bernd speaks of overlaying `glyphs'. Since the problem is with TeX, I think that this primarily means the TFM characters that TeX reads. % IMHO what's really needed is something like a (device independent) % virtual font mechanism in TeX. Would Bernd please expound this suggestion? % The input of \"a should be mapped to a description in an intermediate % format, e.g. `Aumlaut' in current active text font. Where does this intermediate format reside? I'm thinking that e-TeX itself should represent the Aumlaut as a single object. It has long seemed to me that [e-]TeX should be able to construct composite objects, by boxes for example, specify their TFM character-like typesetting properties, and then typeset those composites in exactly the same way as it typesets TFM characters. That leaves the problem of specifying how to hyphenate words containing composites. e-TeX must think `this is the letter "Aumlaut"', hyphenate accordingly, and only later concern itself with how the Aumlaut is composed. If I understand Bernd correctly, this is consistent with his proposal. Comments? Would someone give a comparison of the mechanisms of ML-TeX's \accent and TeX's? Then explain the remaining shortcomings of ML-TeX's method? As stated above, I favor a more general composition of characters (with accents as a particular case), in which e-TeX treats (properly described) composites as `first-class' characters. For this, I don't think we need to write any new TFM or `VF' files, or change any old ones. Do it all in e-TeX macros. Geoffrey Tobin ======================================================================== Date: Tue, 16 Nov 93 17:43:49 +0100 Reply-To: "NTS-L Distribution list" From: Bernd Raichle Subject: Re: Weird idea ? In-Reply-To: KNAPPEN@VKPMZD.KPH.UNI-MAINZ.DE's message of Mon, 15 Nov 93 20:19:00 +0200 <9311160158.AA14774@ifi.informatik.uni-stuttgart.de> KNAPPEN@VKPMZD.KPH.UNI-MAINZ.DE said on Mon, 15 Nov 93 20:19:00 +0200: Joerg> This may be a rather weird proposal... Joerg> Joerg> \goto{Harmfull} and \gotolabel{Harmfull} Joerg> Why not? If the \gotolabel is already defined, \goto goes back to the last Joerg> occurrence of it; if not, it peeks ahead until it finds the appropriate Joerg> label. I have waited for Joachim to say something about tokens, but it seems that I have to do it. You propose something like \goto{<label1>} and \gotolabel{<label2>}, and if <label1> is equal to <label2>, then the \goto will go to the \gotolabel, ignoring all tokens in between. For the equality of these token lists, do we expand them, i.e., \edef\foo{<label1>} \edef\bar{<label2>} \ifx\foo\bar ... or leave them unexpanded, i.e., \def\foo{<label1>} \def\bar{<label2>} \ifx\foo\bar ... or some other equality between the two token lists??? Joerg> [...]
If the \gotolabel is already defined \goto goes back [...] ^^^^^^^^^ TeX can't go back to a certain place in the input stream (input stream = token list or file input), because this stream is a flow of tokens with only _one_ direction... When TeX goes back (or better: when it seems that TeX goes back), it's done in a very simple way: save the loop contents in a token list and insert this token list again and again and ... until some test decides that's enough. > This allows: > * More transparent \loop structures ??? true, if we can't go back? > * Section of comment to be skipped easily and fast, without caring for > balanced braces and/or \if...\else...\fi s. ^^^^^^^^^^^^^^^ The skipped text in an \if...\else...\fi can contain unbalanced braces. Bernd Raichle ======================================================================== Date: Tue, 16 Nov 93 17:59:50 +0100 Reply-To: "NTS-L Distribution list" From: Bernd Raichle Subject: Re: Some special characters In-Reply-To: KNAPPEN@VKPMZD.KPH.UNI-MAINZ.DE's message of Mon, 15 Nov 93 20:38:00 +0200 <9311160158.AA14791@ifi.informatik.uni-stuttgart.de> KNAPPEN@VKPMZD.KPH.UNI-MAINZ.DE said on Mon, 15 Nov 93 20:38:00 +0200: Joerg> Here are some mixed minor thoughts Joerg> Joerg> Make all hard-wired meanings of some input characters freely definable and Joerg> switchable. At the moment, the following come to my mind: Agreed. [...] Joerg> \item[.] \decimalseparator (in europe the comma is used as a decimal Joerg> separator) You can already use a comma or a point for the decimal separator (e.g. \dimen0=12,34pt), but it's not used in the output. Btw. the point is also used inside \patterns{} to specify a word boundary. Joerg> \item[-] ... which is in fact an everbreakable hyphen, has lots of Joerg> strange r\^oles in the hyphenation process, most of which are Joerg> not handled by the \hyphenchar. Words with explicit hyphens can be broken, if you set \hyphenchar\font != `\- before reading the word (and reset it before the next \par), because TeX adds a discretionary node after the current \hyphenchar (in unrestricted horizontal mode only). Joerg> It is the only character that may indicate break points in the Joerg> \hyphenation{com-mand} Agreed, but ... I think that the hyphenation part of TeX needs a lot of rework before (even minor) things should be changed and declared as a e-TeX feature for all time. Joerg> Should they handled like \hyphenchar (which means, that there can always be Joerg> only one representant for each function, but the same character can have \hyphenchar is not a good example, because it has one representant for each font; use \newlinechar... Joerg> more then one function) or as new \catcode choices (which means that they Joerg> cannot share functions, but there can be more then one \decimalseparator at Joerg> one time) ? I don't know which one should be preferred for a type of special function. Bernd Raichle ======================================================================== Date: Tue, 16 Nov 93 20:03:06 +0100 Reply-To: "NTS-L Distribution list" From: KNAPPEN@VKPMZD.KPH.UNI-MAINZ.DE Subject: Re: Weird idea ? I must admit, that implicitly I assumed the token list of \goto and \gotolabel not to be expanded. However, an expandable version (\egoto) may be interesting. The ability of going back is crucial, otherwise there is not many use of \goto. I thought of a machanism like the following: If eTeX encounters a \gototlabel, it memorises the current input file and the inputline number and jumps back to that place. 
It should not memorise all the tokens encountered in between. Ah, this gives restrictions: A \gotolabel may not be entered from the terminal interactively (it is just ignored then, and a warning is issued). It shouldn't be hidden in token streams other than input files. Probably, a \gotolabel should also become unusable after TeX closes the parenthesis of an input file (and an `input file closed, label no longer available' warning is issued). Or the strongest restriction: the \gotolabel must occur in the same file as the corresponding \goto (a rather safe assumption). --J"org Knappen. ======================================================================== Date: Tue, 16 Nov 93 19:18:31 GMT Reply-To: "NTS-L Distribution list" From: Tim Bradshaw Subject: Re: Weird idea ? In-Reply-To: <9311161911.aa27333@uk.ac.ed.castle> * KNAPPEN wrote: (About adding gotos to TeX) If you want looping constructs of a more sophisticated kind, wouldn't it be more sensible to add *them* rather than gotos? --tim ======================================================================== Date: Tue, 16 Nov 93 22:22:36 CET Reply-To: "NTS-L Distribution list" From: Joachim Schrod Subject: Re: Weird idea ? In-Reply-To: <199311160159.AA61359@rs3.hrz.th-darmstadt.de> from "KNAPPEN@VKPMZD.KPH.UNI-MAINZ.DE" at Nov 15, 93 08:19:00 pm You wrote: > > This may be a rather weird proposal... > > \goto{Harmfull} and \gotolabel{Harmfull} A forward goto is an \iffalse. What is a backward goto in a macro expansion language? (Only the very first token in the token stream is ever executed. Execution in such a language means that this token disappears.) Joachim -- =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Joachim Schrod Email: schrod@iti.informatik.th-darmstadt.de Computer Science Department Technical University of Darmstadt, Germany ======================================================================== Date: Tue, 16 Nov 93 22:33:16 CET Reply-To: "NTS-L Distribution list" From: Joachim Schrod Subject: Re: Weird idea ? In-Reply-To: <199311161651.AA21984@rs3.hrz.th-darmstadt.de> from "Bernd Raichle" at Nov 16, 93 05:43:49 pm You wrote: > > KNAPPEN@VKPMZD.KPH.UNI-MAINZ.DE said on Mon, 15 Nov 93 20:19:00 +0200: > Joerg> This may be a rather weird proposal... > Joerg> > Joerg> \goto{Harmfull} and \gotolabel{Harmfull} > Joerg> Why not? If the \gotolabel is already defined, \goto goes back to the last > Joerg> occurrence of it; if not, it peeks ahead until it finds the appropriate > Joerg> label. > > I have waited for Joachim to say something about tokens, but it seems > that I have to do it. I'm about to leave for a holiday... (Schifoan!) But if you want to trigger me: The other proposal (about the special characters) does not concern characters at all. You determine tokens with it. I.e.: the number specification '33 is the token list ((other . ') (other . 3) (other . 3)). So you want to determine what token will introduce an octal number... (Besides that nit-picking, I'm fine with the proposal. :-) Joachim -- =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Joachim Schrod Email: schrod@iti.informatik.th-darmstadt.de Computer Science Department Technical University of Darmstadt, Germany ======================================================================== Date: Wed, 17 Nov 93 12:23:24 +0100 Reply-To: "NTS-L Distribution list" From: Stephan Lehmke Subject: Re: Weird idea ? Joachim Schrod wrote: > You wrote: > > > > This may be a rather weird proposal...
> > > > \goto{Harmfull} and \gotolabel{Harmfull} > > A forward goto is an \iffalse. > > What is a backward goto in a macro expansion language? (Only the very > first token in the token stream is ever executed. Execution in such a > language means that this token disappears.) > > Joachim As far as I see, the proposed \goto was not meant to be implementable in the language TeX, as \loop is (I think), but to be a new primitive that sort of `rewinds' the input focus of eTeX itself on the very input file. If or how this is possible I don't know, but as far as I see, it has nothing to do with macro expansion... It just means the input pointer on the current file `jumps'. I fear this might heavily collide with the macro expansion mechanism, though, creating all sorts of side effects... > Joachim Schrod Stephan --- Stephan Lehmke lehmke@ls1.informatik.uni-dortmund.de I'm a student at the University of Dortmund, Germany. All opinions are mine. ======================================================================== Date: Fri, 19 Nov 93 17:55:34 +0100 Reply-To: "NTS-L Distribution list" From: Bernd Raichle Subject: Composite characters (was: Re: Questions and thoughts) In-Reply-To: ecsgrt@LUXOR.LATROBE.EDU.AU's message of Tue, 16 Nov 93 14:13:24 +1100 <9311160315.AA21928@ifi.informatik.uni-stuttgart.de> Geoffrey Tobin wrote on Tue, 16 Nov 93 14:13:24 +1100: Regarding composite characters: Bernd speaks of overlaying `glyphs'. Since the problem is with TeX, I think that this primarily means the TFM characters that TeX reads. % IMHO what's really needed is something like a (device independent) % virtual font mechanism in TeX. Would Bernd please expound this suggestion? A virtual font file in an `vf' file is a piece of dvi code for one character (it's possible to define a vf character giving a complete TeX page). TeX sees only one character, not the whole dvi/vf code, because it looks into the tfm file of the vf, never into the vf file. This means that instead of writing dvi code in symbolic form in a `vpl' file (either by hand or using programs) it should be possible to enhance TeX in such a way that you specify on the TeX macro level -- the necessary tfm info (ht, dp, wd, italic correction, ligatures, kerning) and -- pieces of normal TeX code (characters, fonts, boxes, rules, specials, kerns) to compose this character. A very simple example syntax of such an extension can be \newvirtualfont\foo make new virtual font assigned to symbol \foo. \newcharacter{"E4}\foo{{\tenrm \"a}}{\tenrm a}{\tenrm a} define new character on position "E4 in font \foo with the composition {\tenrm \"a} and the same behaviour on the left and right side as the character `a in font \tenrm. (For a real extension we should have more control over the ligature and kerning program.) % The input of \"a should be mapped to a description in an intermediate % format, e.g. `Aumlaut' in current active text font. Where does this intermediate format reside? I'm thinking that e-TeX itself should represent the Aumlaut as a single object. Yes, the "intermediate format" is TeX internal only and the `Aumlaut' should be a single object. It has long seemed to me that [e-]TeX should be able to construct composite objects, by boxes for example, specify their TFM character- like typesetting properties, and then typeset those composites in exactly the same way as it typesets TFM characters. That leaves the problem of specifying how to hyphenate words containing composites. 
e-TeX must think `this is the letter "Aumlaut"', hyphenate accordingly, and only later concern itself with how the Aumlaut is composed. If I understand Bernd correctly, this is consistent with his proposal. Comments? Yes. IMHO if the user types ``\"a'' or ``ä'' (ä = one! letter), this should be defined in such a way that both are represented with an `Aumlaut' object, and this object should be used for hyphenation. At the moment of typesetting this object (or when we need the dimensions of this object) it will be decided, based on the current font, whether there's a corresponding glyph in the font or whether we have to compose it using other glyphs. Would someone give a comparison of the mechanisms of ML-TeX's \accent and TeX's? Then explain the remaining shortcomings of ML-TeX's method? As stated above, I favor a more general composition of characters (with accents as a particular case), in which e-TeX treats (properly described) composites as `first-class' characters. In ML-TeX almost all accent macros \", \', ... are redefined to produce the characters with the equivalent ISO Latin-1 codes (e.g. \"a expands to ^^e4). Further we can define character substitutions using the new primitive \charsubdef in the following way: \charsubdef `^^e4 = 127 `a If the user types \"a (or ^^e4), ML-TeX checks if the character glyph "E4 exists in the current font, and if it exists this font glyph is used. If the character glyph doesn't exist and a \charsubdef-inition for this character is given, ML-TeX uses ^^e4 for hyphenation and the dimensions of the base character `a' for all internal computations. And when the dvi code for the character ^^e4 is written to the dvi file, ML-TeX substitutes the character with the equivalent of {\accent 127 a} Bernd Raichle ======================================================================== Date: Mon, 22 Nov 93 12:30:26 CET Reply-To: "NTS-L Distribution list" From: Joachim Schrod Subject: Re: Weird idea ? In-Reply-To: <199311171126.AA04474@rs3.hrz.th-darmstadt.de> from "Stephan Lehmke" at Nov 17, 93 12:23:24 pm Stephan Lehmke wrote: > > Joachim Schrod wrote: > > You wrote: > > > > > > This may be a rather weird proposal... > > > > > > \goto{Harmfull} and \gotolabel{Harmfull} > > > > A forward goto is an \iffalse. > > > > What is a backward goto in a macro expansion language? (Only the very > > first token in the token stream is ever executed. Execution in such a > > language means that this token disappears.) > > > > Joachim > > As far as I see, the proposed \goto was not meant to be implementable in the > language TeX, I've understood this. > It just means the input pointer on the current file `jumps'. > > I fear this might heavily collide with the macro expansion mechanism, though, > creating all sorts of side effects... Exactly. Now you've found the point of my question. What is a ``back jump'' in terms of _this_ macro language? The problem is not a macro language per se; it's the question of what to do with the existing token queue at the moment of jumping. (For instance, what is \expandafter\foo\goto{label}.) Btw, it doesn't really matter which answer is given to this question, the traceability is lost even more than in the normal usage of gotos. (I hope that you, as a CS student, have read Dijkstra's letter; and have not only heard of it.) I.e.: I agree fully with Tim that \goto is the last thing we need in terms of a control structure. Higher-level constructs are better.
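To make the earlier point concrete (a tiny plain TeX illustration, nothing more): a forward jump already exists in the form of a false conditional, and the skipped material is passed over at high speed without regard to brace balance; only the nesting of conditionals themselves has to match.

   \iffalse
     } braces in skipped text need not balance {
     none of this is ever typeset or expanded
   \fi
   \message{execution continues here, as after a forward jump}

A backward jump has no such counterpart, which is exactly the question raised above.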
Joachim -- =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Joachim Schrod Email: schrod@iti.informatik.th-darmstadt.de Computer Science Department Technical University of Darmstadt, Germany ======================================================================== Date: Wed, 24 Nov 93 11:57:40 +0100 Reply-To: Mike Piff From: Mike Piff Subject: File handling The \read command in TeX is very primitive. There is little control over how much is read. One is forced to read a whole line, or perhaps several lines if the input file contains grouping tokens. Suggestion: \readtoken should read just one token. (I would prefer \readchar, but no doubt that is against the spirit of TeX.) Perhaps others could suggest further useful primitive operations. (Hackers: Is there some subtlety of TeX that I have missed that allows \read to read one token at a time?) Mike Piff %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% Dr M J Piff %% e-mail: %% School of Mathematics and Statistics %% %% University of Sheffield %% M.Piff@sheffield.ac.uk %% Hicks Building %% %% SHEFFIELD S3 7RH %% Telephone: (0742) 824431 %% England %% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ======================================================================== Date: Wed, 24 Nov 93 12:12:43 +0100 Reply-To: Mike Piff From: Mike Piff Subject: File handling 2 TeX is hampered by not having the ability to input from several files concurrently. Thus, if setting two languages in parallel, it would be useful to have some mechanism which allowed a transfer out of an \input file whilst remembering the current input pointer, and a mechanism to continue with that input at that pointer later. Suggestion: Input should be done on ``channel numbers''. When starting TeX it defaults to channel0 unless a file name is specified, when it defaults to channel1. Instead of \input xxx, use \input n xxx, although perhaps a default mechanism could be devised to allow n to be omitted for the documents that already exist. Maybe \cinput n xxx would be better. A mechanism could be provided to pause from channel n. \pause n would do. To continue with channel n, \continue n. To close n, \close n. Mike Piff %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% Dr M J Piff %% e-mail: %% School of Mathematics and Statistics %% %% University of Sheffield %% M.Piff@sheffield.ac.uk %% Hicks Building %% %% SHEFFIELD S3 7RH %% Telephone: (0742) 824431 %% England %% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ======================================================================== Date: Tue, 23 Nov 93 23:08:00 +0200 Reply-To: "NTS-L Distribution list" From: KNAPPEN@VKPMZD.KPH.UNI-MAINZ.DE Subject: Re: Weird proposal ? The idea of a backjumping \goto is not as foreign to TeX as it might look on the first sight. For example the following construction: \gotolabel{harmfull} some tokens \ifsomething \goto{harmfull} \fi can be emulated (with serious drawbacks) in current TeX (!) in the following manner: \input harmless %harmless.tex %%%%%%%%%%% some tokens \ifsomething \input harmless \fi \endinput %/harmless.tex %%%%%%%%%% The serious drawbacks are the following: * There are only few iterations possible, until TeX bangs against the limit of input levels. * Each \input causes two strings to be defined, at least one of which (often both) are eternal. This means, the string memory is cluttered rapidly. 
So, what's wrong with \goto in this macro language and not wrong anywhere else? -- J"org Knappen. ======================================================================== Date: Wed, 24 Nov 93 12:47:46 GMT Reply-To: "NTS-L Distribution list" From: Tim Bradshaw Subject: Re: Weird proposal ? In-Reply-To: <9311241220.aa15803@uk.ac.ed.castle> * KNAPPEN wrote: [I paraphrase brutally!] > The idea of a backjumping \goto is not as foreign to TeX as it might look > on the first sight. For example the following construction: > [You can do some kinds of innocuous goto with recursion] > The serious drawbacks are the following: > [Stacks get big and TeX has silly limits] > So, what's wrong with \goto in this macro language and not wrong anywhere > else? I have no theoretical objections to goto in object languages: indeed I've spent quite a lot of time writing automata-based systems where goto is a useful (not vital) construct in the object language. But surely everyone knows by now that goto is neither necessary nor desirable in languages for humans to write. Even a non-computer-science type like me knows this. Let's stop beating this dead horse and worry about what should be added to stop people wanting goto. I suggest some more sophisticated looping constructs for one. If people really, really want goto in the language then I suggest that the `right' (i.e. least bad) thing to add is continuations in the Scheme sense, which I think solves the problems Joachim mentioned (though I'm not sure whether they would be conceivable in a macro language like TeX). --tim
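For reference, the looping construct that plain TeX already provides works exactly the way Bernd described earlier in this thread: the loop body is saved in a macro and re-inserted until a test fails, so nothing ever `goes back' in the input. The first three lines below are essentially the definitions from plain.tex, followed by a small usage example (the counter name \n is arbitrary):

   \def\loop#1\repeat{\def\body{#1}\iterate}
   \def\iterate{\body \let\next\iterate \else\let\next\relax\fi \next}
   \let\repeat=\fi  % makes \loop ... \if... \repeat skippable as a unit

   \newcount\n  \n=1
   \loop \message{pass \the\n}\advance\n by 1 \ifnum\n<4 \repeat

Any richer iteration primitives for e-TeX would presumably still have to be expressible in these terms, i.e. as token-list re-insertion rather than as jumps within an input file.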
======================================================================== Date: Wed, 24 Nov 93 15:38:08 +0100 Reply-To: Mike Piff From: Mike Piff Subject: Re: File handling Michael Downes writes: %>A version of \readchar that should be satisfactory for your purposes %>is do-able in TeX 3.x: %> %>% Usage: \readchar\infile to\nextchar %>% %>% Reads one line at a time (always, because {} are deactivated) and %>% stores it in \readbuffer, then distributes the contents of %>% \readbuffer one character at a time upon request. %>% %>\def\readchar#1to#2{% %> \ifx\readbuffer\empty %> \ifeof#1\errmessage{End of file, cannot read another char}% %> \else %> \begingroup %>% Deactive all special characters while reading %> \def\do##1{\catcode`##1=12 }\dospecials %>% etc OK, so this reads a character at a time, but the problem still remains to read a token at a time. One could of course read the next chunk of the file--- possibly large if the whole file is surrounded by {\bf...} say---and then parse a token off. However, the effect would not be as desired, eg, \def\makebarcontrol{\catcode`\|=0\relax} \catcode`\|=11 \readtoken\infile to\temp \temp \readtoken\infile to\temp File contains: \makebarcontrol|abc ... \temp should now contain the token \abc, not the token |.
Mike Piff %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% Dr M J Piff %% e-mail: %% School of Mathematics and Statistics %% %% University of Sheffield %% M.Piff@sheffield.ac.uk %% Hicks Building %% %% SHEFFIELD S3 7RH %% Telephone: (0742) 824431 %% England %% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ======================================================================== Date: Thu, 25 Nov 93 10:13:15 +0100 Reply-To: Mike Piff From: Mike Piff Subject: Automatic italic correction Judging by the discussions going on on the LaTeX-L list, and the problems involved in getting that to work using macros, despite some gallant attempts, I would say that automatic italic correction should be a *must* for e-TeX, and indeed for TeX. Anyone agree? Mike Piff %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% Dr M J Piff %% e-mail: %% School of Mathematics and Statistics %% %% University of Sheffield %% M.Piff@sheffield.ac.uk %% Hicks Building %% %% SHEFFIELD S3 7RH %% Telephone: (0742) 824431 %% England %% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ======================================================================== Date: Thu, 25 Nov 93 14:30:12 +0100 Reply-To: "NTS-L Distribution list" From: Bernd Raichle Subject: Re: Automatic italic correction In-Reply-To: Mike Piff's message of Thu, 25 Nov 93 10:13:15 +0100 <9311250931.AA06113@ifi.informatik.uni-stuttgart.de> > Anyone agree? I agree. Can somebody try to describe an algorithm for an automatic italic correction, e.g. if font and slant change if height(next character) < ... and slant > 0 then insert it-corr else if depth(next character) > ... and slant < 0 then insert it-corr etc.etc. or some other ideas how to decide if TeX should insert italic correction?? Remember, that fonts can have positive and negative slant and that you have to read the next character before you will insert an italic correction (e.g. " \rm {\it f}. " but " \rm {\it f}l " ). Bernd Raichle ======================================================================== Date: Thu, 25 Nov 93 14:59:04 +0100 Reply-To: "NTS-L Distribution list" From: Anselm Lingnau Subject: Re: Automatic italic correction In-Reply-To: (Your message of Thu, 25 Nov 93 14:30:12 N.) <9311251333.AA38402@gauss.math.uni-frankfurt.de> Bernd Raichle writes: > Remember, that fonts can have positive and negative slant and that you > have to read the next character before you will insert an italic > correction (e.g. " \rm {\it f}. " but " \rm {\it f}l " ). Shouldn't the insertion of italic corrections be deferred until (just before?) e.g. a paragraph is broken into lines? By that time, all the `next characters' (viz. their dimensions) will be available, and we don't have to peek ahead at the input. Anselm --- Anselm Lingnau .................................. lingnau@math.uni-frankfurt.de Liberty without learning is always in peril and learning without liberty is always in vain. --- John F. 
Kennedy ======================================================================== Date: Thu, 25 Nov 93 17:40:28 +0100 Reply-To: "NTS-L Distribution list" From: Bernd Raichle Subject: Re: Automatic italic correction In-Reply-To: Anselm Lingnau's message of Thu, 25 Nov 93 14:59:04 +0100 <9311251356.AA12878@ifi.informatik.uni-stuttgart.de> Anselm Lingnau said on Thu, 25 Nov 93 14:59:04 +0100: AL> Bernd Raichle writes: > Remember, that fonts can have positive and negative slant and that you > have to read the next character before you will insert an italic > correction (e.g. " \rm {\it f}. " but " \rm {\it f}l " ). AL> Shouldn't the insertion of italic corrections be deferred until (just before?) AL> e.g. a paragraph is broken into lines? By that time, all the `next characters' AL> (viz. their dimensions) will be available, and we don't have to peek ahead at AL> the input. Shouldn't the building of ligatures and insertion of kerns be deferred until (just before) a paragraph is broken into lines or a \hbox is packed??? If DEK had realized TeX in this way, there would have been no problems with different ligature reconstitutions after the hyphenation pass of the line breaking algorithm. [The problems are: * the "shelf{}ful" problem mentioned in the TeXbook, because the empty group {} inhibits the ligature "ff", but it is reinserted if TeX hyphenates the paragraph * the "shelf{}ful" problem again, because non-expandable tokens (empty group, \relax, ...) prevent ligatures and the insertion of kerns * ligatures containing non-letters: TeX--The Program, "41. Post-hyphenation", \S ??: ``[...] further complications arise in the presence of ligatures that do not delete the original characters. When punctuation precedes the word being hyphenated, \TeX's method is not perfect under all possible scenarios, because punctuation marks and letters can propagate information back and forth. [...]'' * other problems ] Bernd Raichle ======================================================================== Date: Thu, 25 Nov 93 16:47:22 LCL Reply-To: Mike Piff From: Mike Piff Subject: Re: Automatic italic correction %> %>Bernd Raichle writes: %> %>> Remember, that fonts can have positive and negative slant and that you %>> have to read the next character before you will insert an italic %>> correction (e.g. " \rm {\it f}. " but " \rm {\it f}l " ). %> %>Shouldn't the insertion of italic corrections be deferred until (just before?) %>e.g. a paragraph is broken into lines? By that time, all the `next characters' %>(viz. their dimensions) will be available, and we don't have to peek ahead at %>the input. %> %>Anselm I agree with this; it has to be done after macro expansion or it isn't always going to work. The "automatic" macros in LaTeX2e work by peeking forwards and backwards, but Frank has pointed out some odd circumstances where they fail through gulping a \fi, as I recall. It *must* be the paragraph stage we work on. We are now into fantasy land, e.g., what if an exceptionally tall character is followed by a minute one, such as \Large\it A\ \tiny\rm b. It makes sense to have a left and right correction too, before and after your character. Mike Piff %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% Dr M J Piff %% e-mail: %% School of Mathematics and Statistics %% %% University of Sheffield %% M.Piff@sheffield.ac.uk %% Hicks Building %% %% SHEFFIELD S3 7RH %% Telephone: (0742) 824431 %% England %% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
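What the requested automatism would replace is the manual rule of thumb from the TeXbook: insert \/ where a slanted font is followed by upright material, except before a period or comma. A minimal plain TeX illustration of that rule (these example phrases are invented here, not anyone's proposed algorithm):

   the {\it half\/}-life of radium   % slanted-to-upright boundary: \/ wanted
   the {\it half}, more or less      % but no \/ before a comma or period

An automatic heuristic would have to derive the same decision from the slants and dimensions of the two adjacent glyphs, which is why a concrete algorithm, rather than a wish, was asked for above.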