PIPE DREAMS

What's New in CMS/TSO Pipelines

Melinda Varian
Office of Computing and Information Technology
Princeton University
87 Prospect Avenue
Princeton, NJ 08544 USA

Email:     Melinda@Princeton.EDU
Web:       http://PUCC.Princeton.EDU/~Melinda
Telephone: (609) 258-6016

VM Workshop
June, 1996


CONTENTS

I.    Introduction
II.   BatchPipeWorks (TSO Pipelines)
III.  Distribution of the CMS Pipelines Runtime Library
IV.   Plumbers' Workbench
V.    New Application Programming Interface
         Encoded Pipeline Specifications
         Co-Pipes
VI.   New sources of Pipelines information
         PIPELINE CFORUM
         VM Collection CD-ROM
         CMS Pipelines Web Page
         Author's help
VII.  PRPQ features re-introduced since CMS 8
         fileslow/diskslow
         qsam
         synchronise
         pipcmd
         ldrtbls
         nucext
         Writing Assembler pipeline stages
VIII. New features added in CMS 10, 11, 12, and beyond
         Infrastructure improvements
         Syntax enhancements
            Legalization of common errors
            Self-escaping characters
            Hexadecimal and binary "delimited strings"
            Generalized input ranges
            Secondary streams on command processor stages
            Case-insensitive matching
         New features for existing stages
            >, >>, and disk
            asatomc, c14to38, mctoasa, and overstr
            block and deblock
            buildscr
            change
            chop
            console
            count
            delay
            diskrandom
            diskupdate
            drop and take
            duplicate
            faninany
            fullscreen
            instore
            ispf
            join
            lookup
            mdiskblk
            pipcmd
            rexx
            REXX device drivers
            spec
            split
            starmsg
            state and statew
            unique
            xlate
         On record delay and preventing pipeline stalls
            copy and elastic
            elastic with two input streams
         Other new stages and subcommands
            abbrev
            addrdw
            all
            beat
            begoutput
            browse
            casei and zone
            combine
            deal and gather
            dfsort
            escape
            fanintwo
            fanoutwo
            frtarget and totarget
            joincont
            juxtapose
            listpds and snake
            not
            parcel and random
            pick
            piptstkw
            predselect
            reverse
            scm
            spill
            sqlselect
            starmon
            starsys
            suspend
            timestamp
            untab
            verify
            xrange
         New and enhanced stages for accessing REXX variables
            vardrop
            varfetch
            varset
            varload
            rexxvars
         New stages for use in full-screen applications
         Pipeline termination enhancements
IX.   Support for the Personal/370 and Personal/390

Appendix A: CMS Pipelines Runtime Library Distribution
         Terms and Conditions
         Support
Appendix B: A Simple Co-Pipe Application with Two Fittings
Appendix C: A Simple Example of Using an Encoded Pipeline Specification
Appendix D: Example of a Feedback Loop with "lookup"


I. INTRODUCTION

  The last few releases of VM/ESA have introduced many powerful new
. features into CMS Pipelines.  The CMS 10 version of Pipes was also
. shipped as part of VM/ESA 370 Feature Release 1.5, thus giving 370
. Feature users their first "indoor plumbing".  Princeton is fortunate
  to have access to field-test versions of CMS Pipelines through a
  joint-study arrangement with IBM Danmark, so I am here today to tell
  you about the new features in Pipes from a customer point of view
  (and perhaps to provide a few useful plumbing tips as well).

: CMS 12(1) (VM/ESA 2.1.0) contains much new Pipelines function.
: Unfortunately, substantial portions of this new function are not
: documented in the CMS help files or manuals.  In fact, much of what
: was added to Pipes in CMS 11 is still not documented in the CMS 12
: help files and manuals.
: Fortunately, the author of Pipes has provided extensive documentation
: for all of the new function, and that documentation is now available
: to customers.

: This paper describes almost all of the new function added to CMS
: Pipelines since it was incorporated into CMS, including what has been
: added since CMS 12.  However, for four major CMS 12 enhancements, I
: will refer you to other papers.  These enhancements are full support
: for the Shared File System, support for the new OpenEdition Byte File
: System, the addition of an arithmetic and report-generating
: capability (also known as the "407 emulation") to the spec stage, and
: support for TCP/IP.  For a good discussion of the support for SFS and
: BFS, see John Hartmann's SHARE 84 paper on that subject (available on
: the Runtime Library Web page).  The enhancements to spec are covered
: very well in John's spec tutorial, which is included as an author's
: help file in CMS 12.  The new TCP support in Pipes is being discussed
: in another session at this conference.

--------------------
(1) For the most part, text marked in this paper with a vertical bar
    as the "change bar" refers to changes introduced after CMS 12;
    text marked with a colon as the change bar refers to function new
    in CMS 12; while text marked with a period as the change bar
    refers to changes introduced in CMS 11.
--------------------

II. BATCHPIPEWORKS (TSO PIPELINES)

| Last September, IBM announced BatchPipeWorks as a feature of
| Release 2 of the BatchPipes/MVS product.  Although one would never
| guess it from the announcement letter (295-400), BatchPipeWorks is a
| full-fledged version of TSO Pipelines.  It contains Pipes function
| equivalent to that being shipped in CMS 12.
| In addition, it supports OpenEdition/MVS and has many MVS-specific
| built-in programs:

|   o  >mvs:      Rewrite a physical sequential dataset or a member of
|                 a PDS
|   o  >>mvs:     Append to a physical sequential dataset
|   o  command:   Issue TSO commands
|   o  listcat:   Obtain dataset names
|   o  listdsi:   Obtain information about datasets
|   o  listispf:  Read a PDS directory with ISPF information
|   o  readpds:   Read members from a PDS
|   o  state:     Verify that datasets exist
|   o  sysdsn:    Test whether a dataset exists
|   o  sysout:    Write a system output dataset
|   o  sysvar:    Load system variables into the pipeline
|   o  tso:       Issue TSO commands and write the response to the
|                 pipeline
|   o  writepds:  Store members into a PDS

| Unfortunately, the BatchPipes manuals fail to document much of the
| Pipelines function that is shipped in BatchPipeWorks.  However, full
| documentation for TSO Pipelines is available from two sources that I
| will mention shortly.

III. DISTRIBUTION OF THE CMS PIPELINES RUNTIME LIBRARY

| In May, 1993, SHARE submitted a requirement to IBM stating,
| "Applications written in CMS Pipelines should be permitted to ship
| the Pipelines runtime library with the applications, to unlicensed
| parties.  This ability exists for most languages."  Three months
| later, IBM closed that requirement as "announced", but did not say
| how one could go about actually shipping Pipes with a software
| product without getting sued.

| IBM and Princeton recently signed a software license agreement that
| allows Princeton to redistribute the December, 1995, version of the
| CMS Pipelines runtime library royalty-free to anybody and that also
| allows people who get the version from Princeton to redistribute it
| themselves royalty-free.  In other words, we can all now package a
| recent version of the CMS Pipelines runtime library with our
| software products, including shareware, so there is no reason not to
| use the recent version in one's software.  Note, however, that there
| is no official support for this version.  You are likely to get help
| if you report any problems in one of the Pipelines fora, but there
| is no guarantee of that.

| The CMS Pipelines runtime library distribution is available via the
| World Wide Web (http://PUCC.Princeton.EDU/~pipeline/) or by
| anonymous FTP from PUCC.Princeton.EDU (cd anonymous.376).  You will
| find a README FIRST file that stipulates the conditions, which are
| primarily that you must not remove the IBM copyright notice and that
| IBM retains title to the CMS Pipelines runtime library being
| distributed.  (An excerpt from this README file is in Appendix A.)
| The files that make up the runtime library include the MODULE, the
| help library, and softcopy documentation (in LIST3820 and BOOK
| format).  There is no separate message repository, as the messages
| are included in the MODULE file.  This version of the runtime
| library contains function well beyond what was shipped in CMS 12,
| but it does not include MVS-specific function or IBM
| Internal-Use-Only function, such as the pattern stage.

| The document contained in the BOOK and LIST3820 files is what is
| known familiarly as "the real Pipes book", the CMS/TSO Pipelines
| Guide and Reference written by the author of Pipelines, John
| Hartmann.  It provides complete documentation for the version of
| Pipelines shipped in CMS 12 and in BatchPipeWorks.

IV. PLUMBERS' WORKBENCH

| The Plumbers' Workbench (PWB) has been described as "a workstation
| application that has CMS in its back room".  It is a client-server
| application in which a CMS session is the server to any number of
| OS/2 clients.

| When complete, the Plumbers' Workbench will be a graphical
| environment for developing applications based on CMS/TSO Pipelines,
| a "visual builder" application for assembling pipelines.  It will
| also have extensive debugging capabilities, including a superset of
| the function now available with Chuck Boeheim's nifty PIPEDEMO
| program.

| Version 1 of PWB, which is available now for CMS, includes most of
| the infrastructure needed for the final product, but does not yet
| have the graphical interface.  It is, however, quite useful in
| itself.  The primary function it provides is the ability to pipe
| workstation data through a pipeline on CMS and to return the
| results to the workstation.  The only transport currently supported
| is TCP/IP.  John hopes to add APPC support and to port the server
| to TSO.

| The Plumbers' Workbench provides a CMS command, PWB,(2) and an OS/2
| command, VMPIPE.  The OS/2 VMPIPE command has as its argument a CMS
| pipeline specification to be executed by the server.  VMPIPE sends
| that pipeline specification and its own standard input to CMS.
| The server on CMS executes the pipeline and returns the result to
| VMPIPE, which writes it to its standard output.  Here is a very
| simple example of using VMPIPE:

|    [C:\]echo hello | vmpipe "append literal goodbye | xlate upper"
|    HELLO
|    GOODBYE

| This is an OS/2 pipeline with two stages, "echo" and "vmpipe".  The
| output of this OS/2 command goes to stdout, which in this case was
| the display.  The argument to VMPIPE is a CMS pipeline
| specification, which is enclosed in quotes to prevent OS/2 from
| interpreting the stage separator as an OS/2 pipe symbol.  When the
| server on CMS executes this pipeline specification, it will prefix
| a connector that gets input from the workstation (which in this
| case would be a record containing the word "hello" in lower case),
| and it will suffix a connector that sends the output to the
| workstation, thus producing the output you see here in the OS/2
| window.

| The VMPIPE command supports much fancier pipeline specifications
| than this, however, including multi-stream pipelines and complete
| subroutine pipelines (where the user specifies the connectors).  In
| case the pipeline specification is too long for an OS/2 command,
| VMPIPE can be told to read it from a file.

--------------------
| (2) Anyone interested in developing a TCP-based server using the
|     new Pipes TCP support would do well to read PWB EXEC and the
|     REXX filters that it invokes.
--------------------

| The VMPIPE command also exposes lower-level infrastructure
| functions, which can be useful in their own right.
| For example, the VMPIPE upload and download functions have become
| my preferred method of moving files between OS/2 and CMS, as they
| are much faster than other methods I have used.

| Version 1 of PWB can be obtained by anonymous FTP from IBM:

|    ftp ftp.almaden.ibm.com
|    cd redbooks/SG244523
|    get readme.1st
|    binary
|    get sg244523.exe

| PWB is documented in an IBM International Technical Support
| Organization (ITSO) "redbook", SG24-4523, which is now orderable.
| Softcopy for that publication is included in the FTP distribution.
| The book also includes a discussion of the process of developing
| PWB and relates a CMS heavy's first experience of doing development
| under OS/2, with references to publications and techniques that the
| author found helpful.

V. NEW APPLICATION PROGRAMMING INTERFACE

| CMS 12 and BatchPipeWorks include a new application programming
| interface for CMS/TSO Pipelines.  Although the code is included in
| those two products, the function is not supported by either.  I
| will show an example that uses Assembler, but the new interface can
| be used from any programming language that can generate the
| required parameter lists, which is to say just about any of them.
| There is currently no callable service for these interfaces, so
| REXX programs would require a function package to provide a small
| layer of interface code.

| The new API is documented in a paper of John Hartmann's called
| CMS/TSO Pipelines PIPE Command Programming Interface.  The
| interface provides two new capabilities:

|   o  Encoded pipeline specifications, which are a new way to
|      specify a pipeline, and
|   o  Co-pipes, which are a new way to transfer control between an
|      application and CMS/TSO Pipelines.

| Encoded Pipeline Specifications

| Encoded pipeline specifications are used for invoking Pipes from
| another program.  In an encoded pipeline specification, the
| pipeline structure is specified by means of a control block
| structure, rather than by means of a command string with
| meta-characters to indicate the ends of stages and pipeline
| segments.  This is an example of an encoded pipeline specification:

|    PIPSCBLK ENCODED,TYPE=RUNPIPE
|    PIPSCSTG TYPE=BEGIN
|    PIPSCSTG TYPE=STAGE,VERB='<',ARGS=(FITFSCB+FSCBFN-FSCBD,18)
|    PIPSCSTG TYPE=STAGE,VERB='append',ARGS=(APPARGS,APPLEN)
|    PIPSCSTG TYPE=STAGE,VERB='join',ARGS='*'
|    PIPSCSTG TYPE=STAGE,VERB='pad',ARGS=PADARGS
|    PIPSCSTG TYPE=STAGE,VERB='storage',ARGS=STORARGS
|    PIPSCSTG TYPE=DONE

| Specifying a pipeline by means of a control block structure
| facilitates dynamic construction of pipeline commands.  Appendix C
| has an example of using this encoded pipeline specification.

| Co-Pipes

| Co-pipes make it possible for an application to be written partly
| as a traditional program and partly as a pipeline.  With co-pipes,
| an application that is not in any way controlled by CMS/TSO
| Pipelines has the ability to access data in the pipeline and to
| provide data for the pipeline.  The application program and the
| pipeline run as co-routines, both maintaining their separate
| states.  They take turns running, and they transfer control between
| themselves by a resume operation.
| A fitting stage is the space-warp through which records move
| between the pipeline and the application program; it is the
| application's agent in the pipeline.  A pipeline can contain any
| number of fitting stages.  Appendix B has an example of a co-pipe
| application with two fitting stages.  There isn't time in this
| session to go through the details of the elegant protocol by which
| the application and the pipeline communicate, but you will find it
| explained quite clearly in John's paper.

| The API code is still under development, so I would suggest using
| the version contained in the runtime library distribution rather
| than the version shipped with CMS 12.

VI. NEW SOURCES OF PIPELINES INFORMATION

: A very encouraging development for the Pipelines community of late
: has been the rapid expansion of high-quality information about
: using Pipes.

: PIPELINE CFORUM

: IBM's internal plumbing community has moved all of its discussions
: from the old internal forum to the CFORUM ("customer forum") that
: you and I can get to through the "Your Connection to VM" program.
: In Europe, one reaches the PIPELINE CFORUM by using the CONFER
: fastpath of DIAL-IBM and then choosing the VM conferences.
: PIPELINE CFORUM is a very lively place where plumbers of all
: degrees of skill are welcomed and given assistance.  I hope that
: more customers will come to the party soon.

| VM Collection CD-ROM

| Starting in December, 1995, the VM Collection CD-ROM (SK2T-2067-09)
| also includes "the real Pipes book" (SL26-0018-02 in bookshelf
| IKJ2P401 on Disc 1).  Serious plumbers will find this book to be an
| invaluable source of authoritative information about Pipelines.
| The VM Collection CD-ROM provides viewer programs for OS/2, DOS,
| and Windows.  The files from the CD-ROM can be uploaded to VM or
| MVS for viewing with BookManager READ, and they can be served by
| BookManager BookServer for viewing with Web browsers.

| Users of BatchPipeWorks can order the VM Collection CD-ROM to
| obtain complete TSO Pipelines documentation.

: CMS Pipelines Web Page

: Plumbers should also be aware of the CMS Pipelines Web page
: maintained by Christian Reichetzeder, of the University of Vienna.
: The URL is http://www.akh-wien.ac.at/pipeline.html.

. Author's help

. Perhaps the most significant enhancement to CMS Pipelines in CMS 11
. was the release to customers of the IBM-internal Pipes help files.
. These help files were written by the author of CMS Pipelines and
. are his specifications for the functions they describe.  I believe
. that serious plumbers will find these help files to be
. substantially more useful than those that have been distributed
. with VM/ESA up to now.

. The decision to release the author's help files in CMS 11 was made
. at the last moment, in response to pleas from customers, so this
. feature is not documented in the CMS 11 (or CMS 12) manuals and
. there are a few loose ends in its packaging.  However, using these
. new help files is not difficult and is well worth learning to do.

. The standard help files derived from the VM/ESA CMS Pipelines
. Reference continue to be distributed.  As in the past, they can be
. viewed using either HELP PIPE or PIPE HELP.  The author's help
. files, which are stored in PIPELINE HELPLIB on the help disk, are
. viewed by using the new ahelp stage, e.g.:

.    pipe ahelp locate

. The best way to learn to use this new capability is to issue the
. command pipe ahelp help and mentally translate everything it says
. about the help stage to reference ahelp instead.

. One of the reasons to use pipe ahelp is that it is much faster than
. CMS HELP; it consumes only about one-third as much CPU time.  And I
. believe that you will find that the help files it displays are
. substantially more definitive than the standard help files; they
. contain information that is simply not in the VM/ESA manuals and
. help files.  When you get CMS 11 or 12, I suggest that you enter
. the commands pipe ahelp pipe, pipe ahelp scanner, and pipe ahelp
. pipmod.  I think you will learn some things that you didn't know
. before.

. You will also find that these help files emphasize the unifying
. principles of CMS Pipelines, which makes it easier to master the
. large amount of function and allows the help files to be less
. repetitious.  For example, they do not describe the syntax of the
. arguments for each stage in detail, because in CMS Pipelines an
. argument that is described in one stage as, say, a "delimited
. string" has exactly the same rules as the delimited string
. arguments for any other stage.  If one does not remember these
. rules, it is very simple to find them by entering pipe ahelp
. syntax, which puts one into a file that contains the rules for all
. syntactic variables.

. Another difference between pipe ahelp and standard CMS HELP is that
. ahelp allows one to use the full capabilities of XEDIT; that is, it
. does not attempt to limit the user to a small subset of XEDIT
. subcommands.  In fact, you may be surprised to discover that pipe
. ahelp invokes your own XEDIT profile.  I find that I like that.  If
. you find that you do not, then you need only specify the XEDIT
. NOPROFIL option when you use ahelp:

.    pipe ahelp locate ( noprof

. In fact, if you specify XEDIT options on ahelp, they are stored in
. a lasting global variable (PIPELINE_HELP_XEDIT_OPTIONS in the
. UNNAMED repository) that is inspected whenever you invoke ahelp, so
. you need do this only once to set your options permanently.  (The
. options are upper-cased, which precludes specifying a profile that
. has a mixed-case name.)

. The function of pipe ahelp is also tailorable through two user
. exits:

.   o  XITHLP01 REXX is invoked when the help text is being fed into
.      XEDIT.  A common use for XITHLP01 is to match the character
.      set of the help file to the character set of one's terminal.
.      For example, if your terminal is unable to display text
.      characters (such as box corners), you might wish to use an
.      XITHLP01 REXX similar to this one:

.      /* XITHLP01 REXX:  Exit to translate lines for no TEXT feature */

.      'CALLPIPE (name XITHLP01)',
.         '| *:',
.         '| xlate *-* ea-eb + ee-ef + ab-ac + bb-bc + bf - 8f +',
.            'af o',          /* Bullet */
.            '9e ?',          /* Plus/minus */
.            'e0 ?',          /* Backslant */
.         '| *:'

.      Address Command 'SET TEXT OFF'
.      Exit

.   o  Similarly, XITHLP02 REXX is invoked when sending help menu
.      items into XEDIT.

. Because the message numbers were changed when CMS Pipelines was
. moved into ESA, the author's help files are for a different set of
. message numbers.  For that reason, in CMS 11, pipe ahelp still
. invoked standard CMS HELP to display the VM/ESA help files for
. Pipes error messages.  This was unfortunate, as the author's help
. files do a much better job of explaining error messages.  In
. particular, the author's help files attempt to analyze errors from
. the user's point of view and to explain what users commonly do that
. results in their getting a particular error message.  For example,
. this is the VM/ESA help file for error 2968:

.      (c) Copyright IBM Corporation 1990, 1992

.      2968E stream streamnum connected but not used

.      Explanation:  A stream is connected that is not used by the
.      stage.

.      System Action:  RC=2968.
.      The stage ends.

.      User Response:  Do not connect the indicated stream.

. As you can see, it tells you no more than the error message had
. already told you, that a stream was connected that should not have
. been.

. On the other hand, this is the author's help file for the same
. error:

.      5785-RAC (c) Copyright IBM Corp, 1991.  SB5409 (c) IBM
.      Danmark A/S 1992.

.      539E Do not connect unused stream

.      Explanation:  A stream is connected that the stage does not
.      use.  This is often a symptom of an incorrect placement of a
.      label reference.

.      A selection stage (for instance, "find") detects that the
.      secondary input stream is connected.  "collate" detects that
.      the tertiary input stream is connected.  "fanin", "faninany",
.      "merge", and "overlay" detect a connected output stream other
.      than the primary one.  "fanout" detects a connected input
.      stream other than the primary one.  "lookup" detects that
.      input stream 4 is connected.

.      System Action:  The stage terminates with return code 539.

.      User Response:  Ensure that the reference to the label that
.      specifies the secondary output stream for a selection stage
.      is after an endcharacter.

. This explains the error rather well.  It tells you what situations
. cause various stages to issue the message, and it tells you what
. you probably did to cause the error, based on the author's
. experience.  That is, probably you did not purposely connect an
. unneeded stream, but, rather, you misplaced a label reference or
. left out an endcharacter.

: Fortunately, in CMS 12, ahelp invokes the author's help even for
: error messages.

VII. PRPQ FEATURES RE-INTRODUCED SINCE CMS 8

  Before discussing the features added in CMS 10, 11, and 12, let me
  describe briefly the features re-introduced in recent releases, as
  these may be unfamiliar to some.

  When CMS Pipelines was incorporated into CMS 8 in ESA 1.1,
  significant function from the CMS Pipelines PRPQ was not supported
  or documented (although the code was included in that release and
  could be used if one knew about it).  CMS 9 picked up support for
  four of the stages from the PRPQ that were unsupported in CMS 8:
. fileslow, pipcmd, qsam, and synchronise.  CMS 11 picked up support
: for one more stage from the PRPQ, ldrtbls, and CMS 12 picks up
: support for the nucext stage from the PRPQ.  CMS 12 also picks up
: support for writing pipeline stages in Assembler.

fileslow/diskslow

  fileslow is the diskslow stage from the PRPQ.  (The diskxxxx stages
  were renamed to filexxxx when Pipes was moved into VM/ESA.)
diskslow is used + _____ to read and write CMS files, just as disk[fast] is, but, unlike disk, diskslow does not block the records it reads or writes. It uses the FSREAD and FSWRITE interfaces to the file system to read and write records, issuing a call for each record. Thus, diskslow can be used in situations where unblocked reads or writes are required, such as when several stages are writing to the same file concurrently or when you wish to FINIS a file now and then to checkpoint it. 0 A handy way to FINIS a file after every n records are written is to use + _ a spec stage to swallow up n-1 records with read options and then change + _ the nth record into a FINIS command, which can be executed by a + _ subsequent command stage. Thus, this fragment will FINIS OUTPUT FILE after every fifth record has been written: 0 +----------------------------------------------------------------------+ | | | '...', | | '| diskslow output file ', /* Never do this with ">". */ | | '| spec read read read read /FINIS OUTPUT FILE */ 1', | | '| command' /* Issue FINIS command. */ | | | +----------------------------------------------------------------------+ 0 Another feature of diskslow is that, unlike the other stages used to read and write CMS files, it can start reading or writing from a particular record number: 0 pipe diskslow osmacro1 maclib s from 42000 | . . . 0 qsam + qsam + ____ 0 The qsam stage is used to do I/O via CMS OS simulation, under control of the FILEDEF settings. For example, this fragment picks up two fields from block 0 of an unkeyed BDAM file on an MVS disk volume: 1 0 Page 12 Pipe Dreams ------------------------------------------------------------------------ 0 +----------------------------------------------------------------------+ | | | 'FILEDEF $FILE DISK' fn ft mode '( RECFM F LRECL 1024 DSORG PS', | | 'BLKSIZE 1024' | | | | 'PIPE', /* Examine Block 0 of $File: */ | | 'qsam $file |', /* Read as QSAM. */ | | 'take 1 |', /* Get 0th block. 
*/ | | 'spec', /* Select desired fields: */ | | '3.2 c2d 1', /* $DATUSED, */ | | 'write', /* (write it out), */ | | '5.2 c2d 1 |', /* $DATLAST. */ | | 'var $datused |', /* Save in REXX variable. */ | | 'drop 1 |', /* Neat trick. */ | | 'var $datlast' /* Save in REXX variable. */ | | | +----------------------------------------------------------------------+ 0 The argument to the qsam stage is a DDNAME that has been defined by a FILEDEF command. Because of the take 1 stage, only the first record is read from that dataset. A spec stage is then used to select one field from the record, write that out as a record by itself, and then select one other field to comprise the next output record. The first var stage stores the contents of the first record into a REXX variable. (A var stage uses only the first record it receives; any other records are simply passed on without being examined.) Then the drop 1 stage discards the record used to set the first variable, so that the second var stage uses what had been the second record for setting its variable. 0 qsam can also be used for both reading and writing tapes, in which case its behavior can be defined by a combination of FILEDEF and LABELDEF commands for such purposes as processing multi-volume datasets. This fragment writes a multi-reel tape dataset: 0 +----------------------------------------------------------------------+ | | | Queue 'A01000 A01001 A01002' /* Volids for LABELDEF. */ | | Queue /* EOF for LABELDEF. */ | | 'LABELDEF TRANTAPE VOLID ? FSEQ 1 FID TRANFILE' | | 'FILEDEF TRANTAPE TAP1 SL 1 ( RECFM FB BLKSIZE 20000', | | 'LRECL 100 DEN 6250' | | 'PIPE', | | '...|', /* Get data for tape. 
*/ | | 'qsam trantape' /* Write multi-reel file.*/ | | | +----------------------------------------------------------------------+ 1 0 Pipe Dreams Page 13 ------------------------------------------------------------------------ 0 synchronise + synchronise + ___________ 0 synchronise is used to force records on parallel streams of a pipeline to march through that pipeline in unison. synchronise waits until it has a record on each of its input streams and then copies those records to the corresponding output streams. It is especially useful for throttling back a stage, such as duplicate *, that can produce an infinite number of records. In this example, it is used to synchronise the processing of records with external events; one record is read from the calling pipeline and written back to it each time an SMSG is received: 0 +----------------------------------------------------------------------+ | | | /* PACER REXX: Use external events to pace record processing */ | | | | 'CALLPIPE (endchar ?)', | | '*.input: |', /* Input from caller. */ | | 'sync: synchronise |', /* Correlate with SMSGes. */ | | '*.output:', /* Output to caller. */ | | '?', | | 'starmsg CP SET SMSG IUCV |', /* Capture SMSGes. */ | | 'sync: |', /* Synchronize with input. */ | | 'hole' /* Into the bit bucket. */ | | | | saverc = RC | | Address Command 'CP SET SMSG OFF' /* Back to normal. */ | | | | Exit saverc*(saverc<>12) /* RC = 0 if end-of-file. */ | | | +----------------------------------------------------------------------+ 0 Only when the starmsg stage has captured an SMSG and has made it available on the secondary input stream of the synchronise stage does synchronise process the next record from its primary input stream. Then one record is read from each stream and copied to the corresponding output stream. 
Since synchronise's primary output stream is connected to the calling pipeline and its secondary output stream is connected to a hole stage, the record from the calling pipeline is passed back to the calling pipeline and the SMSGed record is discarded. Then no further records are processed until the next SMSG is received. This subroutine pipeline will continue running until synchronise encounters an end-of-file on any of its input or output streams. (This is the reason for the hole stage. The records on synchronise's secondary output stream are not needed by this application, but that stream must remain connected to prevent synchronise from terminating before we wish it to.) 1 0 Page 14 Pipe Dreams ------------------------------------------------------------------------ 0 pipcmd + pipcmd + ______ 0 pipcmd is a pipeline stage that treats the contents of its input records as pipeline commands to be issued. Usually, the command to be issued is a callpipe. Conceptually, this is very simple, but in practice using pipcmd typically involves writing a spec stage that builds another spec stage, so examples tend to look rather formidable. However, pipcmd is well worth learning to use, because it is an easy way to capture data from a record both before and after the record is transformed in some way. 0 +----------------------------------------------------------------------+ | | | /* FORBOB EXEC: Read contents of a list of files into the */ | | /* pipeline, prefixing each record with the */ | | /* file name and the record number. */ | | | | /* Parms: Arguments to be used for LISTFILE command. */ | | | | /* This pipe generates a series of pipelines and runs them */ | | /* with pipcmd. The generated pipelines are of the form: */ | | /* */ | | /* CALLPIPE (stagesep !), */ | | /* < fn ft fm, * Read a file. */ | | /* ! spec [fn ft fm[ 1 number 21 1-* 31, * Prefix each record.*/ | | /* ! *: * Send to caller. 
*/ |
| |
| 'PIPE', |
| ' command LISTFILE' Translate(Arg(1)), /* Get list of files.*/ |
| '| spec', /* Build CALLPIPE commands.*/ |
| '/CALLPIPE (stagesep !)', /* Constant part of command, */ |
| '</ 1', /* then the read stage, */ |
| '1.20 nextword', /* fn ft fm of file to read, */ |
| '/! spec [/ nextword', /* the prefixing stage: */ |
| '1.20 next', /* file name in 1-20, */ |
| '/[ 1 number 21 1-* 31/ next', /* number 21, record 31, */ |
| '/! *:/ nextword', /* connector to caller. */ |
| '| pipcmd', /* Issue the CALLPIPEs. */ |
| '| >', |
| ' output file a' /* Write output to disk. */ |
| |
+----------------------------------------------------------------------+
0
In the example above, the spec stage receives records produced by the
CMS LISTFILE command and reformats each of them into a callpipe command
that will read the named file and preface each record from the file with
its record number and the name of the file it came from before feeding
it into the main pipeline. One callpipe command per file flows from the
spec stage into the pipcmd stage, which issues it. Because the
pipelines specified by these callpipe commands have an output connector
at the end, the records produced by the called pipelines flow on the
output stream of the pipcmd stage to the > stage.
0
In most cases where pipcmd is used, the same result could be achieved by
putting the callpipe into a REXX filter. FORBOB EXEC is equivalent to
the following EXEC and REXX filter:
1
0
Pipe Dreams Page 15
------------------------------------------------------------------------
0
+----------------------------------------------------------------------+
| |
| /* FORDAVE EXEC: Read contents of a list of files into the */ |
| /* pipeline, prefixing each record with the */ |
| /* file name and the record number. */ |
| |
| /* Parms: Arguments to be used for LISTFILE command. */ |
| |
| 'PIPE', |
| ' command LISTFILE' Translate(Arg(1)), /* List of files. */ |
| '| fordave', /* Bring them into pipe. */ |
| '| > output file a' /* Write output to disk. */ |
| |
+----------------------------------------------------------------------+
0
+----------------------------------------------------------------------+
| |
| /* FORDAVE REXX: Send contents of files into the pipe */ |
| /* preceded by file name and record number. */ |
| |
| Signal On Error |
| |
| Do Forever /* Do until get EOF.
*/ | | 'PEEKTO record' /* Examine the next record.*/ | | 'CALLPIPE', /* Create a pipeline. */ | | '<' record '|', /* Read specified file. */ | | 'spec', /* Reformat each record: */ | | '/'record'/ 1', /* file name 1-20, */ | | 'number next', /* record number 21-30, */ | | '1-* next |', /* original record 31-*. */ | | '*:' /* Send into main pipeline.*/ | | 'READTO' /* Consume input record. */ | | End | | | | Error: Exit RC*(RC<>12) /* RC = 0 if EOF. */ | | | +----------------------------------------------------------------------+ 0 However, using pipcmd is slightly faster than invoking a REXX filter, and it is often more convenient to keep all the function in one file. 0 | Since CMS 12 was packaged, pipcmd has been enhanced to support a stop | keyword, which causes it to stop when the specified number of output streams have gone to end-of-file. 1 0 Page 16 Pipe Dreams ------------------------------------------------------------------------ 0 . ldrtbls + ldrtbls + _______ 0 . ldrtbls is used to run a stage that has been loaded with the CMS LOAD . command. It resolves an entry point in the CMS loader tables to find . the program to run. The entry point can be: 0 . o An executable machine instruction (not '00'x), which is then invoked . as a stage. . o A program descriptor, which describes the stage to run. . o A pipeline command, which is run as a subroutine pipeline. . o An entry point table, from which the first word of the argument string . is resolved. . o A look-up routine, which resolves the first word of the argument . string. 0 . ldrtbls is very useful when testing a stage written in Assembler or a . REXX stage compiled with the OBJECT option, as it obviates the need to . rebuild a filter package repeatedly. Here it is used to test a compiled . REXX filter: 0 . rexxc myfilter rexx ( object . load myfilter . pipe . . . | ldrtbls myfilter | . . . 0 . And here it is used to run an Assembler filter whose entry point name is . PIPJEWEL: 0 . vmfhasm pipjwl pipe . 
load pipjwl ( nomap . pipe . . . | ldrtbls pipjewel | . . . 0 : nucext + nucext + ______ 0 : nucext is very similar to ldrtbls. It is used to run a stage that has : been loaded as a nucleus extension. 0 : Writing Assembler pipeline stages + Writing Assembler pipeline stages + _________________________________ 0 : User-written Assembler language stages are supported in CMS 12, and a : subset of the Assembler interface is documented in the VM/ESA : manuals.(3) 0 -------------------- 0 : (3) The publication SL26-0020, CMS Pipelines Toolsmith's Guide and + ______________________________________ : Filter Programming Reference, contains a chapter entitled "Assembler + ____________________________ : Interface", which, although somewhat dated, documents more of the : macros used in the Assembler interface. 1 0 Pipe Dreams Page 17 ------------------------------------------------------------------------ 0 : One common reason for writing an Assembler language stage is, of course, : to achieve high performance. Another is to use an interface that is not : available in REXX. At Princeton, we have also found it convenient to : build Assembler language stages to move existing code into pipelines. A : very small amount of pipeline scaffolding wrapped around existing : Assembler code suffices to do this. For example, shortly after we got : Pipes, my colleague Serge Goldstein used this approach to move a local + _____ : JES3 module into a pipeline stage to create a new CMS command for our : end users. The module in question converts the traditional Princeton OS : JOB card to the IBM standard. Serge was able quite easily to re-use : that JES3 code, and the users were pleased to be able to convert their : old JOB cards without running a batch job. 0 : For an introduction to writing Assembler pipeline stages, I refer you to : the tutorial presentation made by John Hartmann at SHARE 84 (available : on the Runtime Library Web page). 
Anyone who wants to write Assembler : filters or read the CMS Pipelines source code will find that tutorial to + _____________ : be quite valuable. The macros used in the Assembler interface can be : found in DMSOM MACLIB in CMS 8 and later, so you need not be running : CMS 12 to use the Assembler interface. 0 : When you are writing an Assembler stage, you may, if you wish, write it : using the macro language in which CMS Pipelines itself is written, but + _____________ : you are not required to do so. This macro language is documented in a : Field Systems Centre Technical Report, Virtual Machine CMS Pipelines: + _______________________________ : Procedures Macro Language. The macros that implement the macro language + _________________________ : can also be found in DMSOM MACLIB in CMS 8 and later. 0 VIII. NEW FEATURES ADDED IN CMS 10, 11, 12, AND BEYOND + VIII. NEW FEATURES ADDED IN CMS 10, 11, 12, AND BEYOND + _______________________________________________________ 0 Infrastructure improvements + Infrastructure improvements + ___________________________ 0 The author of CMS Pipelines has continued to improve the product's + _____________ infrastructure. There have been performance enhancements and enhancements in buffering algorithms, dispatching strategies, storage management, and other areas. 0 For example, a cp query stage now extends its buffer dynamically when no explicit buffer size is used and the default buffer cannot contain the response from the CP QUERY command. 0 The interaction between the console stage and user-written stages that use SET CMSTYPE has been reworked to produce more intuitive results. 0 Starting with CMS 10, the stage numbers reported in error messages count connectors and label references as stages. This change was made for the benefit of Chuck Boeheim's wonderful PIPEDEMO program. Also to facilitate PIPEDEMO, a pause stage has been added, and trace and events . options have been added to the runpipe stage. A detailed description of . 
the events records can be obtained by issuing the command pipe ahelp . pipevent. 1 0 Page 18 Pipe Dreams ------------------------------------------------------------------------ 0 . In CMS 11, runpipe was enhanced to support the Rita pipeline performance . profiler; there are new events records and a new msglevel option. The . runpipe msglevel option and the global and local msglevel options can . now be specified in hexadecimal notation. Also to facilitate Rita, . pipeline names longer than eight characters are now included in their : entirety in CMS Pipelines messages. In CMS 12, the new mask option on + _____________ : runpipe events reduces Rita's CPU utilization by an order of magnitude, and Rita is being distributed on the "Samples and Examples" disk. 0 | Since CMS 12 was packaged, Year 2000 support has been added to CMS/TSO + _______ | Pipelines. The keywords shortdate, isodate, and longdate have been + _________ | added to the aftfst, fmtfst, state, and statew stages. 0 Syntax enhancements + Syntax enhancements + ___________________ 0 Legalization of common errors: Several syntax changes have been made, + Legalization of common errors + _____________________________ some to legalize common user errors. For example, a leading stage separator is now allowed whether or not there are global options: 0 PIPE | < profile exec | . . . 0 and a leading pipeline end-character is now allowed after global options: 0 PIPE (endchar ?) ? < profile exec | . . . 0 Self-escaping characters: Stage separators and pipeline end-characters + Self-escaping characters + ________________________ are now "self-escaping"; for example, to split at stage separators, one doubles the stage separator character in the argument to the split stage: 0 . . . | split || | . . . 
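For readers who want to experiment with the doubling rule away from a VM system, its effect can be sketched in Python. This invented split_stages function is only an illustration of the rule, not the actual CMS Pipelines scanner:

```python
def split_stages(spec, sep="|"):
    """Split a pipeline specification at the stage separator.

    A doubled separator is "self-escaping": it stands for one literal
    separator character inside a stage instead of ending the stage.
    (A sketch of the rule only, not the real Pipes scanner.)
    """
    stages, current, i = [], [], 0
    while i < len(spec):
        if spec[i] == sep and spec[i + 1:i + 2] == sep:
            current.append(sep)                        # doubled: literal character
            i += 2
        elif spec[i] == sep:
            stages.append("".join(current).strip())    # single: stage boundary
            current, i = [], i + 1
        else:
            current.append(spec[i])
            i += 1
    stages.append("".join(current).strip())
    return stages
```

Applied to a specification such as "literal a||b | split || | console", this yields the three stages "literal a|b", "split |", and "console": each doubled separator survives as one literal character in a stage's argument.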
0 Hexadecimal and binary "delimited strings": All "delimited string" + Hexadecimal and binary "delimited strings" + ____________________________________________ arguments can now also be expressed as hexadecimal or binary constants. The syntax is upwards compatible with the specification of hexadecimal items for spec. These are now valid delimited strings: 0 o /abc/ as before o xF1F2 equivalent to "/12/" o b10000001 equivalent to "/a/" 0 Anywhere that a delimited string could be used in the past, one may now express the constant in any one of those forms. For example, 0 . . . | locate 30.1 x04 | . . . 0 searches for hexadecimal 04 in column 30, while this spec stage: 0 . . . | spec 1-* 1 b00000000 next | . . . 0 puts a byte of binary zeros at the end of each record. 1 0 Pipe Dreams Page 19 ------------------------------------------------------------------------ 0 The use of any of these three forms of "delimited strings" has been extended to new stages that are variants of the filters that take a non-delimited string argument: 0 strasmfind strasmnfind strfind strfrlabel strliteral strnfind strtolabel strwhilelabel 0 Thus, . . . | strliteral x04 | . . . 0 creates a one-byte record containing a value of hexadecimal 04, and these strfind stages: 0 . . . | strfind b00000000 | . . . . . . | strfind /&/ | . . . 0 select records that begin with a byte of binary zeros and an ampersand, : respectively. In CMS 12, strlocate has been added as a synonym for : locate, because so many people mistakenly expected such a stage to : exist. 0 Similarly, binary and hexadecimal strings are now supported in the argument to the change stage. These are all valid change stages: 0 ...| change /abc/def/ |... ...| change /abc/ ?def? |... ...| change /abc/?def? |... ...| change /abc/ xC2 |... ...| change /abc/xC2 |... ...| change xF1 /0/ |... : ...| change xF1 b00000000 |... ...| change /abc/ /def/ |... 
0 : In CMS 10, change strings could be expressed as two delimited strings if : the first delimiter occurred only twice. That restriction is relaxed in : CMS 12, as shown in the last example above. A blank is optional after the first string. 0 . In CMS 11, the keyword string, which can be abbreviated to str, is . recognized wherever a "delimited string" argument is allowed. When . string is specified before an argument, the scanner assumes the argument . to be a delimited string and does not test it for being a hexadecimal or . binary constant, even though the delimiter character is "x" or "b". For . example: 0 . . . . | locate string xABCx | . . . 0 . selects records containing the string "ABC". chop, strip, and split, . which already scan for string as an alternative to anyof, accept the . keyword string for either purpose: 0 . . . . | chop string string xABCx | . . . . . . . | chop anyof string xABCx | . . . 0 . The first chop stage chops at the string "ABC", and the second chops at . the first occurrence of "A" or "B" or "C". In both cases, "string . xABCx" defines the literal string "ABC". In an ambiguous case (e.g., + ____ . chop string x0102), the original usage of string is assumed. 1 0 Page 20 Pipe Dreams ------------------------------------------------------------------------ 0 Generalized input ranges: A really powerful new feature of spec in + Generalized input ranges + _________________________ CMS 10 allowed one to specify the portions of an input record that are to be copied to the output record in terms of fields delimited by a "field separator" character. The default field separator is hexadecimal 05, the "tab" character. A different field separator can be specified with the keyword fieldseparator, which has a synonym of fs. The keyword fields, which can be abbreviated to f, is used to select fields delimited by the field separator character, just as the keyword words is used to select blank-delimited words. 
This spec stage defines the equal sign as its field separator and writes an output record containing only the second field, the "b": 0 pipe literal a=b=c| spec fieldsep = field 2 1 | console 0 This new feature is very useful for processing records from a database, for example, but it also greatly extends the parsing power of spec in general. (I will have an example of that later.) Note that a field, unlike a word, can be null, so these two pipelines do not give the same results: 0 pipe strliteral x40F140F2 | spec words 1-2 1 | console 0 pipe strliteral x40F140F2 | spec fieldsep 40 fields 1-2 1 | console 0 The first displays "1 2", while the second, which uses blank as the field separator, displays only "1", because the first blank-delimited field of its input record is null. 0 spec has also been enhanced to make its use of negative numbers for column, word, and field ranges more general, so, for example, "-5" is taken to mean the column range "-5;-5", i.e., the fifth from last + ____ column. 0 . In CMS 11, the definition of an "input range" was extended even further . to allow a different word separator to be specified with the keyword . wordseparator, which has a synonym of ws. 0 . CMS 11 added support for these generalized input ranges to locate, . nlocate, and xlate. Arguments that were defined in the past to be . "column ranges" were generalized to be like the spec input range. That . is, those arguments may be specified in terms of columns, words, or . fields and relative to either the beginning or the end of the record. . For example, these stages select records with the string "banana" in the . second word and the second-to-last word, respectively: 0 . . . . | locate word 2 /banana/ | . . . . . . . | locate word -2 /banana/ | . . . 0 : CMS 12 adds support for generalized input ranges to change, collate, : ispf, lookup, merge, sort, and unique. 
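The distinction between blank-delimited words and separator-delimited fields can be sketched in Python. These two helpers are inventions for illustration and ignore spec's full input-range syntax:

```python
def words(record):
    """Blank-delimited words: runs of blanks collapse, so no word is null."""
    return record.split()

def fields(record, fieldsep):
    """Separator-delimited fields: every separator counts, so a field
    (for instance, one before a leading separator) can be null."""
    return record.split(fieldsep)
```

With the record from the strliteral examples, words(" 1 2") gives ["1", "2"], while fields(" 1 2", " ") gives ["", "1", "2"]: the first field is null, which is the behavior the second pipeline above demonstrates. Similarly, fields("a=b=c", "=") makes "b" the second field.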
The pick and zone stages added : in CMS 11 also support generalized input ranges, as do the deal, gather, : and verify stages added in CMS 12. 1 0 Pipe Dreams Page 21 ------------------------------------------------------------------------ 0 . This fragment selects entries from a CMS NAMES file that contain :LIST . tags and splits them into their individual tag items for further . processing: 0 . +----------------------------------------------------------------------+ . | | . | 'PIPE (endchar ?)', /* Select lists from NAMES: */ | . | '<' Userid() 'NAMES |', /* Read user's NAMES file. */ | . | 'split |', /* Split into words. */ | . | 'f: find :|', /* Select tags. */ | . | 'xlate fieldsep . f1 upper |',/* Uppercase just tag name. */ | . | 'y: faninany |', /* Rejoin streams. */ | . | 'join * / / |', /* Join all records. */ | . | 'split before str /:NICK./ |',/* Split into NAMES entries.*/ | . | 'l: locate /:LIST./ |', /* Select those with lists. */ | . | 'split -1 after str / :/ |', /* Split into tag items. */ | . | . . . | . | '? f: | y:' /* Shunt non-tag words. */ | . | | . +----------------------------------------------------------------------+ 0 . The NAMES file is first split at blanks, so that every word becomes a . record. find : selects the records beginning with tag names and feeds . them into an xlate that upper-cases the tag name but not the tag value . that is separated from the tag name by a period. (Period is defined as . the fieldseparator character for the xlate; that makes the tag name the . first field of each of these records.) The two streams are merged and . then joined into a single record, which is then split before the string . ":NICK." to produce records that contain entire NAMES file entries. . Entries that don't contain ":LIST" tags are discarded. Then the entries . are split into individual tag items to be operated on further. 0 . split -1 after string may be an unfamiliar idiom. split after string . 
/ :/ would split records at each occurrence of a colon preceded by a . blank, and the blank and the colon would remain attached to the end of . the left-hand record in each split. On the other hand, split -1 after . string / :/ leaves the blank at the end of the left-hand record and the . colon at the beginning of the right-hand record in each split. That is, . it splits the records one byte before the end of the specified string. . In this case, this idiom is used to make sure that the colon is at the . beginning of a word, signifying a tag name, because the tag values might . contain colons, and we don't want to make splits on those. 0 . Secondary streams on command processor stages: Starting in CMS 11, the + Secondary streams on command processor stages + _____________________________________________ . "host command processor" stages, cms, command, cp, and subcom, can have . a secondary output stream defined, in which case a record containing the . return code from each command is written to the secondary output after . all of the response lines from the command (or the command itself in the . case of subcom) have been written to the primary output. When these . stages have a secondary output stream, they do not terminate on unknown . commands, but they do terminate as soon as they discover that their . secondary output is not connected. The return code from these stages is 1 0 Page 22 Pipe Dreams ------------------------------------------------------------------------ 0 . zero regardless of the return codes from the commands they issue. These . stages do not allow a command string as an argument when they have a . secondary stream. When the cp stage has a secondary output stream and . it discovers that its buffer is too small to contain the entire response . from CP, it prefixes a plus sign to the return code written to its . secondary output. 0 . The new aggrc stage (which is documented in the author's help files) can . 
be placed in the secondary stream from these command processor stages to . aggregate the return codes by the same rules that these stages have used . themselves in the past. That is, aggrc expects records containing a . number on its input. It produces a single output record containing the . aggregate return code. If any of the numbers on its input is negative, . the aggregate is the minimum of the input numbers; otherwise, it is the . maximum. The aggregate return code is written to the output at . end-of-file on the input. If the write fails with a positive return . code, the return code from aggrc is set to the aggregate return code; otherwise, it is set to zero. 0 : Case-insensitive matching: In CMS 12, many existing stages have been + Case-insensitive matching + _________________________ : enhanced to support an anycase option, which provides case-insensitive : matching. Stages supporting this new option include: between, collate, : inside, lookup, merge, notinside, outside, pick, sort, and unique, as | well as the new abbrev, spill, and verify stages added in CMS 12. Since | CMS 12 and BatchPipeWorks, the anycase keyword has been added to | joincont, all of the strxxxxx stages, locate, and nlocate. In each : case, the new option precedes all other options. You should be aware : that doing case-insensitive matches is more expensive than is doing : case-sensitive matches. In particular, sort anycase should be used only : when necessary. 0 New features for existing stages + New features for existing stages + ________________________________ 0 Many of the familiar CMS Pipelines stages have had nifty new options + ______________ added. 0 : >, >>, and disk: In CMS 12, the stages that write CMS minidisks have + >, >>, and disk + ________________ : been enhanced not to sever their primary output stream when they receive : end-of-file on their primary input stream, unless there is a secondary : stream defined. 
The purpose of this change is to guarantee that the file has been closed
by the time the output stream is severed (which will now happen when the
stage terminates).  Subsequent stages that are triggered by the severing
of that stream can then safely assume that the file has been closed
before their processing begins.

asatomc, c14to38, mctoasa, and overstr:  The stages that manipulate
carriage control characters have been enhanced to support input records
of any size.

block and deblock:  block and deblock have a new option eof to specify
the DOS end-of-file character.  When blocking, the terminate option
added in CMS 10 specifies that the last record should have a linend
character at the end; when deblocking, the terminate option specifies
that a linend character at the end of the last record should not result
in a null record being written to the output.

deblock linend with the terminate option provides a powerful technique
for breaking records up, processing the pieces, and putting the pieces
back together without introducing a record delay or requiring the
overhead of a "sipping" subroutine pipeline.  This example calculates
the "Soundex" values for the names read from the calling pipeline:

+----------------------------------------------------------------------+
|                                                                      |
|  'CALLPIPE (endchar ? name Soundex)',                                |
|     '*: |',                       /* Names from calling pipeline.*/  |
|     'xlate 1-* upper |',          /* Upper-case the name.        */  |
|     'xlate 2-* 00-FF 0',          /* Convert to numbers, but 1st.*/  |
|        'A 0 B 1 C 2 D 3 E 0 F 1 G 2 H 0 I 0',                        |
|        'J 2 K 2 L 4 M 5 N 5 O 0 P 1 Q 2 R 6',                        |
|        'S 7 T 3 U 0 V 1 W 0 X 7 Y 0 Z 2 |',                          |
|     'spec 1-* 1 x00 next |',      /* Delimit word for later.     */  |
|     'deblock 1 |',                /* One record per character.   */  |
|     'unique first |',             /* Don't want runs of the same.*/  |
|     'nfind 0|',                   /* Discard insignificant chars.*/  |
|     'deblock linend 00 terminate |',  /* Restore word bounds.    */  |
|     'chop 4 |',                   /* Truncate to 4 characters.   */  |
|     'l: locate 1 |',              /* Divert null records.        */  |
|     'f: faninany |',              /* Merge streams.              */  |
|     '*:',                         /* Soundices to calling pipe.  */  |
|  '?',                                                                |
|     'l: | spec /0/ 1 | f:'        /* Here if Soundex is null.    */  |
|                                                                      |
+----------------------------------------------------------------------+

Each name read into this subroutine pipeline is translated, delimited
with one byte of binary zeros, and then broken up into one-byte records,
so that runs of the same value can be eliminated.  The one-byte records
are then reassembled into records corresponding to the original input
records using deblock linend to recognize and remove the zero marker
delimiting the records.  The terminate option prevents a spurious null
record from being produced when the last input record is deblocked.

block fixed now accepts the record length as a suboption, to force an
error if the records in the pipeline are not the expected length.  In
CMS 10, several new options were added to specify additional blocking
types:

o  vs      variable spanned records;
o  crlf    PC-style blocking with '0D25'x between records;
o  string  records delimited by the specified string;
o  sf      structured fields (halfword length field that includes its
           own length);
o  admsf   like sf, but the length field is never spanned; and
o  gdf     records containing GDF data (deblock only).

In CMS 11, deblock has a new onebyte option, for deblocking logical
records that are prefixed by a one-byte length field.  (That length
includes the length byte.)

Many features have been added to block and deblock in CMS 12.  Both now
support the awstape option, which specifies the format used for files
that simulate magnetic tapes on the Personal/370 and Personal/390.  This
fragment from a REXX filter can read a tape and convert it to a format
that can be written to a CMS file that can then be downloaded to an OS/2
or RS/6000 system, from which the associated P/370 or P/390 can read it
as though it were a tape:

+----------------------------------------------------------------------+
|                                                                      |
|  'ADDPIPE',                       /* Post-process the output:    */  |
|     '*.output: |',                /* Output from CALLPIPEs.      */  |
|     'block 4096 awstape |',       /* Block for AWSTAPE driver.   */  |
|     '*.output:'                   /* Write to the pipeline.      */  |
|                                                                      |
|  Do Until records = 0             /* Loop through tape files:    */  |
|     'CALLPIPE (endchar ?)',       /* Process one tape file:      */  |
|        'tape |',                  /* Read until an end-of-file.  */  |
|        'c: count lines |',        /* Count records in file.      */  |
|        '*:',                      /* Send records to ADDPIPE.    */  |
|     '?',                                                             |
|        'c: |',                    /* Record count to here.       */  |
|        'var records'              /* Store for EOF test.         */  |
|     'OUTPUT'                      /* Null record for tape mark.  */  |
|  End                                                                 |
|                                                                      |
+----------------------------------------------------------------------+

The added pipeline is inserted into the output stream of this stage to
block the records produced by the stage into the format expected by the
AWSTAPE device driver.  The Do Until loop reads a tape until it finds a
file that contains no records, i.e., until it has read the double
tapemarks at the end of the tape.  It copies each file from the tape to
the output of the stage, which is connected to the input of the ADDPIPE.
As each file is read, the record count is stored in the index variable
for the Do Until, and an OUTPUT subcommand is used to write a null
record, which will be interpreted as a tape mark.

deblock has a new option decimal, which deblocks records that begin with
a length field in printable decimal.
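To make the printable-decimal idea concrete, here is a rough sketch of
the same deblocking logic in Python (used here purely for illustration;
the fixed width of the decimal field and the convention that it counts
only the data following it are assumptions of this sketch, not a
statement of the stage's exact format):

```python
def deblock_decimal(data, width=4):
    """Split DATA into logical records, each prefixed by a length
    field in printable decimal.  The fixed WIDTH of the field, and
    the convention that it counts only the data that follows it,
    are assumptions of this sketch."""
    records = []
    pos = 0
    while pos < len(data):
        length = int(data[pos:pos + width])    # printable-decimal length
        pos += width
        records.append(data[pos:pos + length])
        pos += length
    return records
```

Under those assumptions, deblock_decimal(b"0003abc0002hi") yields the
two records b"abc" and b"hi".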
deblock has another new option monitor, which can be used to deblock
VM/ESA CP monitor data.  The format is like the "structured field"
format, but the blocks are expected to be padded with nulls, and records
do not span blocks.  This subroutine pipeline will deblock monitor data
to individual monitor data records, discarding the control records:

+----------------------------------------------------------------------+
|                                                                      |
|  'CALLPIPE (endchar ?)',          /* Deblock CP monitor data:    */  |
|     '*: |',                       /* Input from pipeline.        */  |
|     'b: between x0000 2 |',       /* Control rec + header rec.   */  |
|     'strnfind x0000 |',           /* Select data header records. */  |
|     'spec 13-* |',                /* Remove 3-word header.       */  |
|     'f: faninany |',              /* Merge w/continuation blocks.*/  |
|     'deblock monitor |',          /* Deblock monitor data blocks.*/  |
|     'addrdw sf |',                /* Restore record length field.*/  |
|     '*:',                         /* Output to pipeline.         */  |
|  '?',                                                                |
|     'b: | f:'                     /* Shunt continuation blocks.  */  |
|                                                                      |
+----------------------------------------------------------------------+

This subroutine pipeline is an order of magnitude faster than my best
earlier attempts to deblock CP monitor data using either REXX or a
pipeline.  The first block of each set of data records is located by the
fact that it comes immediately after a control record.  The control
records are discarded, as are the first three words of the first block
of each set of data records.  All the data blocks are then deblocked
into the constituent monitor records.  This process removes the two-byte
length field at the beginning of each monitor record.  addrdw ("Add
Record Descriptor Word") is a new stage that will be described in more
detail later.  What it does here is prefix a two-byte binary length
field to each record that passes through it, thus restoring the monitor
records to the format expected by most tools.
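The "sf" record format that addrdw restores here is easy to describe in
ordinary code.  This Python sketch (an illustration only, not CMS
Pipelines code) builds and then deblocks a byte string in that format,
with each record prefixed by a big-endian halfword length field that
includes its own two bytes:

```python
import struct

def add_rdw_sf(records):
    """Prefix each record with a big-endian halfword length field
    that includes its own two bytes (the "sf" convention)."""
    return b"".join(struct.pack(">H", len(r) + 2) + r for r in records)

def deblock_sf(data):
    """Recover the records from a string of "sf" structured fields."""
    records, pos = [], 0
    while pos < len(data):
        (length,) = struct.unpack(">H", data[pos:pos + 2])
        records.append(data[pos + 2:pos + length])
        pos += length
    return records
```

The two functions are inverses: deblock_sf(add_rdw_sf(records)) returns
the original list of records.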
A more complete example of using deblock monitor can be found in
MDATPEEK REXX on the CMS 12 Samples and Examples disk (normally MAINT's
193 disk).

The new fromright option on deblock fixed writes output records from
right to left through the input record.  This is particularly useful for
dumping a trace table that is moving forward in storage; the last entry
is written first.

block and deblock now support the cms4 and sf4 options, which are like
the cms and sf options except that the record length fields are
fullwords rather than halfwords.

The new textfile option on block and deblock is provided for use with
the Byte File System.  deblock textfile is a synonym for deblock linend
terminate.  block textfile forces on the terminate option and specifies
that input records and delimiters are never to be spanned over output
blocks.  (If the sum of the lengths of the input record and the
delimiter is greater than the block size, a separate output record is
written for each; otherwise, as many records and delimiters as possible
are blocked into the buffer.)

Since CMS 12 was packaged, two new keywords have been added to deblock.
The rfc959 keyword specifies deblocking according to RFC 959 (FTP).
Each logical record contains a flag byte followed by the two-byte length
of the remainder of the record.  Note that since the length field is
embedded in the record, it is retained in the output record.

The rdw (for "Record Descriptor Word") keyword provides a do-it-yourself
deblocking kit.  Its syntax is:

   DEBLOCK RDW range number [STRIP]

The range specifies the location of the record descriptor, which
contains the length of the record less the second number.  When the
record descriptor begins in the first column, the strip keyword may be
used to delete the record descriptor from the output record.  Thus,
deblock rdw 2.2 3 performs the same operation as deblock rfc959; a
two-byte length field starts in column 2, and that length field is 3
less than the entire length of the record.  Similarly, deblock sf could
be implemented as deblock rdw 1.2 0 strip; that is, a two-byte length
field starts in column 1; that length field is inclusive--it counts its
own length--so the record to be deblocked is exactly that long; and the
length field is not retained in the output record.

buildscr:  buildscr loads the APL/TEXT translate table from its
secondary input stream, if one is defined, and a new (eighth) argument
specifies whether the target screen supports extended highlighting.  In
CMS 11, buildscr accepts hexadecimal notation for the attribute bytes
and has a ninth argument to specify encoding for an APL keyboard or a
TEXT keyboard.  It also now supports control records of any length.

change:  change now writes unchanged records to its secondary output
stream, if one is defined.  The anycase keyword added to change in
CMS 10 not only does a case-insensitive match; it also preserves the
case for acronyms and words that have the first letter capitalized:

   Input:     Change:        Output:

   Abc        /abc/def/      Def
   ABC        /abc/def/      DEF

I find the anycase option particularly useful for deleting a given
string without regard to case.  And I frequently find myself defining
the secondary output for change, so that I can simultaneously change and
select.  Using these two features together is even better:

+----------------------------------------------------------------------+
|                                                                      |
|  'CALLPIPE (endchar ?)',          /* Look for a PATH command:    */  |
|     'stem cmsut1. |',             /* Load original mail stem.    */  |
|     'strip leading |',            /* Discard leading blanks.     */  |
|     'c: change anycase 1.5 /path // |',  /* Is first word PATH?  */  |
|     'append literal |',           /* (Null record, in case not.) */  |
|     'var path',                   /* Yes, store PATH's argument. */  |
|  '?',                                                                |
|     'c: |',                       /* Non-PATH lines to here.     */  |
|     'buffer |',                   /* Hold until stem all read.   */  |
|     'stem cmsut1.'                /* Replace the mail stem.      */  |
|                                                                      |
+----------------------------------------------------------------------+

In this example, the mail file in the stem "cmsut1" is examined for the
occurrence of a PATH command (in upper-, lower-, or mixed-case), and the
stem is rewritten with all PATH commands removed and with the variable
"path" set either to null or to the argument of the first PATH command
encountered.  To accomplish this, the stem is loaded into the pipeline,
and leading blanks are removed to assure that its commands begin in
column one.  The change anycase stage selects any PATH commands and
removes the command itself from those records.  An append literal stage
is used to insert a null record into the primary output stream from
change anycase, in case no PATH commands are found, and the variable
"path" is set to the value of the first record read by the var stage.
All records that are not PATH commands are diverted to the secondary
output of the change anycase stage, where they are buffered until all
input has been read and are then rewritten as the "cmsut1" stem.

chop:  In CMS 11, chop supports an argument of "0" (for completeness).

console:  console has a new option dark for reading from the console
without displaying what the user enters.  This is ideal for prompting
for passwords, as shown here:

   'PIPE console dark | take 1 | var password'

count:  In CMS 11, count records is a synonym for count lines.  Starting
in CMS 10, count has new options minline and maxline to determine the
length of the shortest and longest records.  This can be quite useful
for doing such things as building NETDATA headers.
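In conventional code, minline and maxline are just reductions over the
record lengths.  A Python equivalent of what count minline maxline
reports might look like this (an illustrative sketch only; the real
stage writes its result as a single record at end-of-file):

```python
def count_minmax(records):
    """Report the lengths of the shortest and longest records,
    as count minline maxline would."""
    lengths = [len(record) for record in records]
    return min(lengths), max(lengths)
```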
This example from John Hartmann shows an interesting new idiom using
count minline; this subroutine pipeline shifts a file left as many bytes
as necessary to assure that at least one line has no leading blanks:

+----------------------------------------------------------------------+
|                                                                      |
|  /* Shift file left until at least 1 line is flush to the left */    |
|                                                                      |
|  'CALLPIPE (endchar ?)',                                             |
|     '*: |',                       /* Input from calling pipe.    */  |
|     'o: fanout |',                /* Make a second copy.         */  |
|     'chop after not blank |',     /* Leading blanks + 1.         */  |
|     'count minline |',            /* Minimum leading + 1.        */  |
|     'spec',                       /* Build CALLPIPE command:     */  |
|        '/CALLPIPE',               /*    CALLPIPE                 */  |
|        '*.input.1: ||',           /*       *.input.1: |          */  |
|        'spec/ 1 1-* nw /-* 1 ||', /*       spec n-* 1 |          */  |
|        '*:/ next |',              /*       *:                    */  |
|     'c: pipcmd |',                /* Run the CALLPIPE.           */  |
|     '*:',                         /* Output to caller.           */  |
|  '?',                                                                |
|     'o: |',                       /* Second copy of file.        */  |
|     'buffer |',                   /* Hold until counted.         */  |
|     'c:'                          /* To PIPCMD's secondary.      */  |
|                                                                      |
|  Exit RC*(RC<>12)                 /* RC = 0 if end-of-file.      */  |
|                                                                      |
+----------------------------------------------------------------------+

A second copy of the input records is made and sent to the second
pipeline segment to be buffered while the first set of records is
manipulated to determine the least number of leading blanks in any line.
chop after not blank truncates each record immediately after its first
non-blank character, and count minline produces a single output record
containing the number n, the length of its shortest input record.  n is
one greater than the number of columns the file is to be shifted left.
spec uses n to construct a spec stage in a callpipe command.  When that
callpipe is executed by the pipcmd stage, it reads the buffered records
from the second pipeline segment and copies them to its output, starting
with the nth character of each record, thus shifting the file to the
left as required.

delay:  delay is enhanced in CMS 12 to support time increments of
fractions of a second.  This pipeline waits 0.05 seconds:

   'PIPE literal +0.05 | delay'

Up to six digits may be specified following the decimal point, allowing
for microsecond resolution.

diskrandom:  diskrandom has a new option blocked to specify that all
records in a range should be written into the pipeline as a single
record.

diskupdate:  diskupdate supports blocked records for fixed-format files.
Additional logical records can be appended to a record.

drop and take:  drop * and take * are supported in CMS 11 and specify to
drop or take all records.  In CMS 12, the new bytes option on drop and
take treats the input as a byte stream, dropping or taking only the
specified number of bytes, even if the breakpoint occurs in the middle
of a record.

duplicate:  The argument to duplicate can be -1, in which case it
produces no output.

faninany:  faninany has a new strict option in CMS 12 that causes
faninany to suspend itself before waiting for a record.  This allows
other ready pipeline stages to run and, thus, ensures that faninany
always passes the record from the lowest-numbered stream that has
produced one.

fullscreen:  fullscreen has a new option asynchronously which causes it
to write to the terminal as records arrive, reading from the terminal
only in response to an attention interrupt.
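The effect of the bytes option is easy to mimic in ordinary code.  This
Python sketch (an illustration, not the stage itself) takes the first n
bytes from a stream of records, splitting the record in which the count
runs out; the remainder is returned separately, standing in for what the
real stage would pass to its secondary output stream:

```python
def take_bytes(records, count):
    """Take the first COUNT bytes from a stream of records, splitting
    the record in which the byte count runs out.  The remainder stands
    in for what the real stage would pass to its secondary output."""
    taken, rest = [], []
    for record in records:
        if count >= len(record):
            taken.append(record)          # whole record fits
            count -= len(record)
        elif count > 0:
            taken.append(record[:count])  # split mid-record
            rest.append(record[count:])
            count = 0
        else:
            rest.append(record)
    return taken, rest
```

For example, take_bytes([b"abcd", b"efgh"], 6) takes all of the first
record and only the first two bytes of the second.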
instore:  instore has a new operand pgmlist to write a REXX program
list.  This is used for in-stream REXX programs (of which more later).

ispf:  The ispf stage now reads variables from the function pool; all
restrictions on multiple use of variables have been removed.  New
operands vcopy and vreplace have been added to interface directly to the
function pool variables.

join:  join supports a third operand, which specifies the maximum output
record length.  Thus, to flow a paragraph contained in the stem "p" into
lines of 72 characters or fewer, one would first split the contents into
blank-delimited words and then rejoin all the resulting single-word
records, putting a single blank between each and forming them into
output records of 72 bytes or less:

   'PIPE stem p. | split | join * / / 72 | stem flowed.'

In CMS 12, join has a new keylength option, which causes it to join
consecutive records starting with the same "key".  The keyword keylength
is followed by a number specifying the length of the field to be
compared between consecutive records.  The key field is discarded from
all but the first record in a set of matching records, and the records
are then joined into a single record (or into multiple records if the
specified output length is exceeded).

lookup:  I have always thought that Pipes would be a treasure even if
the only capability it offered were that implemented by the lookup
stage.  Now, in CMS 12, lookup has been extensively enhanced and has
become even more valuable.  As you will recall, lookup processes an
input stream (the "detail records") against a reference (the "master
records"), comparing a key field.  When a detail record has the same key
as a master record, one or both of the records are passed to the primary
output stream.  Unmatched detail records are passed to the secondary
output stream.  Unmatched master records are passed to the tertiary
output stream after end-of-file has been reflected on the primary input
stream.  Or, if the count option has been specified, all master records
are passed to the tertiary output stream, each prefixed with a count of
the number of matching detail records found.

As I have already mentioned, lookup now supports having its key fields
defined as generalized input ranges, and it accepts an anycase option to
specify that case is to be ignored when keys are being matched.  It also
supports the new option autoadd, which indicates that any unmatched
detail records are to become master records.  One thing that can be
achieved with the autoadd option is to write only one occurrence of each
record from a file that contains duplicate records, without sorting the
file and without delaying the records:

   pipe (endchar ?) . . . | l: lookup autoadd ? l: | . . .

Because autoadd is specified here, each unmatched detail record is added
to the reference after it has been written to the secondary output of
lookup.  Thus, any subsequent duplicate records will be matched and
written to the primary output of lookup, which is not connected in this
case.

lookup now supports up to four input streams and up to six output
streams:

o  Records on the tertiary input are considered to be additional master
   records.  They are added to the reference as they arrive.  (Use the
   keyword strict to ensure that the tertiary input stream is read
   before the primary when both have a record available.)
o  Records on the quaternary input are also in the master record format.
   Any records stored in the reference that have the same key as a
   record read from the quaternary input are removed from the reference
   and written to the quaternary output stream (if the count option
   would have caused them to be written to the tertiary eventually).
o  In the past, when the secondary input was being read to build the
   reference, duplicate master records were deleted.  Duplicate master
   records (from either the secondary or tertiary input) that are not
   stored in the reference are now passed to the quinary output stream
   rather than being deleted.
o  Unmatched deletes from the quaternary input stream are passed to the
   senary output stream.

lookup also now allows duplicate master records to be stored in the
reference, if the keyword allmasters is specified.  When a detail record
matches more than one master record, it is written to the primary output
with copies of all those masters.  If the keyword pairwise is specified,
copies of the detail record are paired with its matching masters on the
primary output.

For an example of using the new streams on lookup, see Appendix D.

+----------------------------------------------------------------------+
|                                                                      |
|  /* UNERASE EXEC:  Dangerous!  Use only on DDR copy of minidisk! */  |
|  /* This EXEC attempts to "unERASE" the most recently            */  |
|  /* erased file by flip-flopping the DOP (Disk Origin            */  |
|  /* Pointer) in block 3 between 4 and 5.                         */  |
|                                                                      |
|  Parse Upper Arg fm .                                                |
|                                                                      |
|  'PIPE command QUERY DISK' fm '| drop 1 | spec 8.4 1 | var cuu'      |
|                                                                      |
|  'PIPE (name UNERASE)' ,                                             |
|     'mdiskblk number read' fm '3 |',  /* Read block 3 (DOP at 30).*/ |
|     'xlate 30.1' ,                /* Flip-flop the DOP:          */  |
|        '00-FF 0' ,                /*   anything bad to "0";      */  |
|        '04 05' ,                  /*   flip 04x to 05x; and      */  |
|        '05 04 |' ,                /*   flip 05x to 04x.          */  |
|     'nlocate 30.1 /0/ |' ,        /* In case all screwed up.     */  |
|     'mdiskblk write' fm '|' ,     /* Write block 3 if DOP OK.    */  |
|     'spec 30.1 c2x 1 |' ,         /* Format the DOP.             */  |
|     'console'                     /* Display DOP on console.     */  |
|                                                                      |
|  'ACCESS' cuu fm                  /* Re-ACCESS the minidisk.     */  |
|                                                                      |
+----------------------------------------------------------------------+

mdiskblk:  mdiskblk has a new option number which causes it to preface
the records it reads with a record number.  It has also been enhanced to
write CMS file system blocks, in addition to reading them.  The most
obvious application for this is an UNERASE EXEC (shown above) that reads
block 3 of a minidisk, flip-flops the Disk Origin Pointer (DOP), and
then rewrites block 3.  In UNERASE, an mdiskblk number read stage is
used to read block 3 of the minidisk and prepend the block number to the
block.  Then an xlate stage flip-flops the DOP byte in column 30 by
translating "4" to "5" and vice versa.  As those are the only legal
values for the DOP byte, the xlate stage should never translate that
byte to "0", but, if it does, then the subsequent nlocate stage will
discard the record and produce no input for mdiskblk write to write onto
the disk.  Normally, the DOP is flip-flopped and mdiskblk write removes
the prepended block number and replaces block 3 on the minidisk with the
remainder of the contents of the record from the pipeline, thus pointing
the file system to the previous version of the directory for that
minidisk.  (Although the write keyword was added in CMS 10, it is
documented only in the author's help file.)

pipcmd:  In CMS 11, the pipcmd stage supports the rexx pipeline command.

rexx:  Starting with CMS 10, the rexx stage can read its program from an
input stream.  In CMS 11, the rexx stage supports compiled REXX in
filter packages and on its input stream.

REXX device drivers:  Starting with CMS 10, rexxvars, stem, var, and
varload can be used with addpipe.  The REXX environment they access is
the one in effect at the initial PIPE command (which is the only one
sure to remain throughout the life of the added pipeline).

These stages have a new producer keyword, which specifies that the REXX
environment to be accessed is that of the stage connected to the input
stream of the stage that issued the subroutine pipeline containing the
rexxvars, stem, var, or varload.  As the author's help file says, "This
is a somewhat esoteric option."

In CMS 11, rexxvars, stem, var, and varload have a new main keyword,
which specifies that the REXX environment to be accessed is the one that
issued the PIPE command or the runpipe stage.  This allows a subroutine
pipeline to set variables in its caller without knowing the number of
REXX environments in the path.  A number is optional after producer or
main to specify the number of REXX environments to go back beyond the
producer or main environment.

In CMS 11, stem, var, and varload support the new keywords symbolic and
direct, which specify whether the variable names are to be substituted
further (which is the default for stem and var), or are to be used
exactly as they are written (which is the default for varload).  For
example, the first pipeline in this fragment sets the value of the
variable X.BANANA, while the second pipeline sets the value of the
variable X.HELLO:

   hello = 'BANANA'
   'PIPE literal xyz | var X.HELLO symbolic'
   'PIPE literal abc | var X.HELLO direct'

CMS 10 added the keyword tracking to the var stage.  tracking specifies
that var at the beginning of a pipeline should repeatedly retrieve and
write the value of its variable until it receives a non-zero return code
on a write to the pipeline.  Later in a pipeline, var tracking sets the
value of the variable each time it receives an input record.  var
tracking also has the useful characteristic that, unlike simple var, it
does not drop the variable if it receives no input record.
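As a rough analogy (REXX substitution rules are more involved than
this), the difference between symbolic and direct can be pictured with a
Python dictionary standing in for the REXX variable pool; under
symbolic, each simple symbol in the tail of a compound name is replaced
by its current value before the assignment is made:

```python
def set_var(pool, name, value, symbolic):
    """Assign VALUE to NAME in POOL (a dict standing in for the REXX
    variable pool).  With symbolic=True, each simple symbol in the
    tail of a compound name is replaced by its value first.  This is
    only a loose analogy to the real substitution rules."""
    if symbolic and "." in name:
        stem, tail = name.split(".", 1)
        parts = [pool.get(part, part) for part in tail.split(".")]
        name = stem + "." + ".".join(parts)
    pool[name] = value

pool = {"HELLO": "BANANA"}
set_var(pool, "X.HELLO", "xyz", symbolic=True)    # sets X.BANANA
set_var(pool, "X.HELLO", "abc", symbolic=False)   # sets X.HELLO
```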
Prior to CMS 11, rexxvars always truncated the value of variables to 512
bytes, but one could retrieve the full value with var.  However, the
variable name specified for var was substituted, so the value it
retrieved was for the derived variable name.  This fragment (which comes
from John Hartmann) uses CMS 10 function to retrieve the full values of
all variables that have the stem BLUE:

+----------------------------------------------------------------------+
|                                                                      |
|  'PIPE (endchar ?)',              /* Get all values of BLUE.:    */  |
|     'rexxvars |',                 /* Load all exposed variables. */  |
|     'find n BLUE.|',              /* Select only BLUE. names.    */  |
|     'buffer |',                   /* Wait until all loaded.      */  |
|     'spec',                       /* Format as input to VARLOAD: */  |
|        '/,!,/ 1',                 /*   variable name ("!"); and  */  |
|        '8-* next |',              /*   variable value (tail).    */  |
|     'varload |',                  /* Set value of "!" to tail.   */  |
|     's: synchronise |',           /* Synchronize with VAR.       */  |
|     'hole',                       /* Pull all records through.   */  |
|  '?',                                                                |
|     'var BLUE.! tracking |',      /* Get value of compound symbol*/  |
|     's: |',                       /* Synchronize with VARLOAD.   */  |
|     . . .                         /* Operate on values of BLUE.  */  |
|                                                                      |
+----------------------------------------------------------------------+

rexxvars loads the names and the (possibly truncated) values of all
exposed variables into the pipeline.  The name records for the variables
with the stem BLUE. are selected and buffered until all variables have
been loaded.  Then the tails of those compound symbols are converted to
the format required for input to varload, a delimited string denoting a
variable's name followed by its value.  Here the variable's name is
simply an exclamation point, and its value is the tail of the compound
symbol.  Thus, varload sets the value of the variable named "!" to each
tail in turn, synchronized with var tracking, which repeatedly retrieves
the compound variable with the symbolic name "BLUE.!" until the values
of all variables with that stem have been loaded.

In CMS 10, stem has a new option from, which specifies the index of the
first item of the array to be read into the pipeline or written from the
pipeline:

   pipe stem vogons. from 42 | . . .

spec:  spec has a new keyword strip that is used to specify that an
input field be stripped of leading and trailing blanks before being
converted and placed in the output field.  For example,

   . . . | spec 6.8 strip c2x 1 | . . .

left-aligns the contents of the field and does not convert trailing
blanks to hexadecimal, while this spec prefixes each record with a
3-digit, left-aligned record number:

   . . . | spec number strip 1.3 1-* next | . . .

The spec number option has two new suboptions, from and by, which
specify the starting number and the increment.  For example, this spec
prefixes each record with a 3-digit, left-aligned record number,
starting with 0 and incrementing by 2:

   . . . | spec number from 0 by 2 strip 1.3 1-* next | . . .

(In CMS 12, the from and by values can be up to 15 digits.)

The spec conversion routines have been expanded to allow additional
combinations of the existing "from" and "to" options and to include two
new "from" and "to" options, "i" (for "ISOdate") and "p" (for "packed
decimal").  The internal format for an ISO date is the Julian date
format used by MVS.  '93068F'x or '0093068F'x represent March 9, 1993,
so this pipeline displays the string "19930309":

   pipe strliteral x93068F | spec 1-* c2i 1 | console

Three additional pairs of digits are allowed for the time (in hours,
minutes, and seconds).
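The date arithmetic behind c2i can be checked with a few lines of
ordinary code.  This Python sketch (an illustration only; it assumes a
19xx century and ignores the optional time digits) converts a packed
Julian date such as '93068F'x to the eight-digit string spec produces:

```python
import datetime

def c2i(julian_hex):
    """Convert a packed Julian date such as "93068F" to an eight-digit
    ISO date string.  A 19xx century is assumed, and the optional
    time digits are not handled in this sketch."""
    digits = julian_hex.rstrip("Ff")                  # drop sign nibble
    year = 1900 + int(digits[:-3])
    day_of_year = int(digits[-3:])
    date = datetime.date(year, 1, 1) + datetime.timedelta(day_of_year - 1)
    return date.strftime("%Y%m%d")
```

Under those assumptions, c2i("93068F") returns "19930309", matching the
pipeline above.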
0 Conversions to and from packed decimal can specify the scaling factor in parentheses (abutted to the name of the conversion routine), so this pipeline displays the string "+93.068": 0 pipe strliteral x93068F | spec 1-* c2p(3) 1 | console 0 : Very major enhancements to spec are included in CMS 12. It now supports : multiple output streams with the outstream keyword, and there is a new : output placement option nextfield. But, most importantly, spec has been : enhanced with a vast amount of arithmetic and report-generating : capability. This new capability is described very well in the author's : help files for CMS 12. To see a detailed reference, enter pipe ahelp : specref. For a tutorial presentation, enter pipe ahelp spectut. 0 . split: In CMS 11, split has a new keyword, minimum, to specify the + split + _____ . least number of characters that must be kept together. For example, to . deblock a 3270 inbound data stream, where the buffer addresses are . 14-bit and could thus contain '11'x, this split would guarantee that . each Set Buffer Address order ('11'x) is followed by at least the two . bytes of address information, so that none of the two-byte buffer . addresses is split, even if it contains '11'x: 0 . . . . | split minimum 3 before 11 | . . . 0 . starmsg: In CMS 11, starmsg supports an input stream. When starmsg is + starmsg + _______ . not first in a pipeline, it issues its input lines as CMS commands and . terminates when the input reaches end-of-file. This may be useful for . capturing the response from a CP message issued as a side effect of a . CMS command; for example, this fragment discards the CP "DETACHED" . message: 0 . 'CP SET CPCONIO IUCV' . 'PIPE literal RELEASE C ( DET | starmsg | hole' . 
'CP SET CPCONIO OFF' 1 0 Pipe Dreams Page 35 ------------------------------------------------------------------------ 0 state and statew: state and statew have a new option quiet to suppress + state and statew + ________________ return code 28 when a file is not found and another new option nodetails to copy the input record (which contains the name of the file) to the primary output when the file exists (rather than writing an output record containing a description of the file). Together, these options provide a capability for feeding, say, a getfiles stage with the names of files that are known to exist, thus avoiding ugly return codes. 0 unique: unique has a new option pairwise for comparing pairs of + unique + ______ consecutive records from its input stream. If the records in a pair are equal (within the specified range), they are discarded; if they are not equal, they are selected. Any unpaired input record is also selected. This fragment uses the pairwise option in testing whether the records in its input stream are already sorted on a particular field: 0 +----------------------------------------------------------------------+ | | | . . . | | 'o: fanout |' , /* Make two copies of records. */ | | 'sort 19.3 |' , /* Sort this copy on cuu's. */ | | 's: spec' , /* Interleave two copies: */ | | '1-* 1 write' , /* copy record from stream 0, */ | | 'select 1 1-* 1 |' , /* copy record from stream 1. */ | | 'unique 19.3 pairwise |' , /* Select unmatched pairs. */ | | errormsg , /* Format error messages. */ | | '?', | | 'o: |' , /* Second copy of records. */ | | 'buffer |' , /* Buffer while sorting other. */ | | 's: |' , /* Go compare with others. */ | | | +----------------------------------------------------------------------+ 0 The fanout stage makes two copies of the input stream. One set of records is sorted; the other is held in a buffer while the first is being sorted. 
Then spec is used to feed pairs of records, one from each set, into
unique pairwise, which discards the matching pairs and feeds any
unmatched pairs into a set of error message stages, which can analyze
them to report on the records that are out of order in the original
file.

xlate:  xlate now has the synonym translate.  In CMS 10, support was
added to xlate for four additional code pages, 274 (Belgium), 275
(Brazil), 297 (France), and 871 (Iceland).  In addition, the default
code page (500) can be specified as a no-op.  In CMS 11, xlate also
supports codepages 819 (ISO 8859-1), 850 (PC variant), 437, 863, and
865.  In CMS 12, xlate supports codepage 1047, which is used by
OpenEdition and C/370.

Beginning with CMS 10, xlate supports any number of "default tables";
their operations are combined.  So, to convert codepage 37 to or from
ASCII with a single xlate stage, one would use:

   . . . | xlate 1-* from 37 e2a | . . .

   . . . | xlate 1-* a2e to 37 | . . .

In CMS 11, xlate can read the initial translate table from its
secondary input stream and will use that as it is, unless translation
elements to apply atop it are supplied in the arguments to the stage.

On record delay and preventing pipeline stalls

CMS 10 added new function for preventing pipeline stalls, even when
pipelines contain loops.

copy and elastic:  copy is a simple stage that puts a "quantum delay"
(a tiny bit of elasticity) into a pipeline.  copy is simply the
traditional "null" stage (a readto/output loop) made official and is
the best thing to use when a pipeline needs only the potential for a
one-record delay in order not to stall.
elastic is the long-awaited "buffer-as-required" stage; it is a general
solution to the problem of pipeline stalls, a stage that prefers to
write records but will read and buffer as many records as may be
necessary to keep the pipeline going.

Perhaps a brief review of pipeline stalls would be in order, to provide
the background for introducing elastic.  First, it is important to
understand that pipeline stalls are the penalty we pay for the fact
that the pipeline dispatcher moves records through a multi-stream
pipeline in such a way that the order of their arrival at the end of
the pipeline is predictable.  By understanding the way in which records
pass through a pipeline, one can learn to prevent stalls while
retaining predictability.  Pipeline stalls most commonly arise from
this sort of pipeline topology:

+----------------------------------------------------------------------+
|                                                                      |
|      +----------+     +------+     +------+     +----------+         |
|      |          |-----|      |-----|      |-----|          |         |
|      |          |     +------+     +------+     |          |         |
|   ---| splitter |                               |  joiner  |---      |
|      |          |            +------+           |          |         |
|      |          |------------|      |-----------|          |         |
|      +----------+            +------+           +----------+         |
|                                                                      |
+----------------------------------------------------------------------+

A "splitter" stage writes records to two or more output streams.  Later
in the pipeline, a "joiner" stage reads records from those same
streams.

When any stage writes an output record, it remains "blocked" until that
record has been "consumed" by the stage connected to its output stream.
That is, it does not regain control until after the record has been
read with the equivalent of the pipeline command readto.  However, most
pipeline stages do not consume a record when they first read it.  They
first read the record with the equivalent of a peekto command, which
does not consume it.
They do not consume the record until after they have written it to
their own output stream and it has been consumed by the stage to which
they wrote it.  (Stages that process records in this way are said not
to "delay the record".)  Thus, when splitter stages, such as locate,
drop, fanout, and others, write a record on one output stream, they
must then wait until that record has been consumed before they can
write another record to any of their output streams.

If all the stages in the multi-stream portions of a pipeline like the
one shown above are of the sort that do not delay the record, then each
of them passes the record along without consuming it until after each
of the subsequent stages has consumed it.  Ultimately, then, this
splitter stage must wait for the joiner stage to consume each record
before the splitter can write the next one.  If the joiner stage cannot
consume a record that the splitter stage has written, then the pipeline
stalls.

If the joiner stage is faninany, then this configuration will never
stall, because faninany always reads any record that is available on
any of its input streams.  Other joiner stages are more exacting,
however.  Some, such as spec and synchronise, wait until they have a
record available on each of their input streams before they consume any
of them and then consume them in stream-number order.  Others, such as
collate and merge, wait until they have a record available on each of
their input streams and then choose which one to consume based on the
contents of the records.  fanin is the most extreme; it consumes all
the records from one input stream before it will read any records from
any other input stream.

A further complication is that a stage in the multi-stream portion of a
pipeline may "buffer" the records; that is, some stages, such as sort
and instore, consume all their input records before writing any output
records and, thus, may keep the joiner stage waiting for records on one
of its input streams.
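These blocking rules can be made concrete with a toy simulation.  The
sketch below is in Python and is purely illustrative: the one-slot
"streams" and the scheduling loop are inventions of this example, not
the real pipeline dispatcher.  A splitter alternates its writes between
two unbuffered streams, and two joiner policies are compared:

```python
def simulate(writes, joiner):
    """Toy model of unbuffered pipeline streams.

    writes : the sequence of stream numbers the splitter writes to.
    joiner : 'faninany' consumes a record as soon as one appears on
             either stream; 'spec' consumes only when a record is
             waiting on BOTH streams (as spec and synchronise do).

    Each stream is a single slot.  A stage that has written a record
    stays blocked -- it may not write to ANY of its streams -- until
    that record has been consumed.
    """
    slot = [None, None]
    w = 0
    while w < len(writes) or slot != [None, None]:
        progressed = False
        # The splitter may write only if its previous record is gone.
        if w < len(writes) and slot == [None, None]:
            slot[writes[w]] = writes[w]
            w += 1
            progressed = True
        # The joiner consumes according to its policy.
        if joiner == 'faninany' and slot != [None, None]:
            slot = [None, None]
            progressed = True
        elif joiner == 'spec' and None not in slot:
            slot = [None, None]
            progressed = True
        if not progressed:
            return 'stalled'    # nobody can move: a pipeline stall
    return 'completed'

print(simulate([0, 1, 0, 1], 'faninany'))   # -> completed
print(simulate([0, 1, 0, 1], 'spec'))       # -> stalled
```

The faninany-like joiner drains each record as it is written, so the
run completes.  The spec-like joiner waits for a record on both
streams, but the blocked splitter can never supply the second one; that
deadlock is exactly the stall that the stages discussed in the
following paragraphs are designed to relieve.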
So, in a pipeline where there is a stage that reunites streams that
originated in a single stage earlier in the pipeline, there is a
potential for pipeline stalls.  The splitter stage may try to write
onto one stream while the joiner stage is trying to read from another
stream.  When that happens, the pipeline stalls.

To prevent stalls, one inserts between the splitter and joiner stages
(on one or more of the streams) a pipeline stage that will unblock the
splitter stage by consuming the necessary number of records and holding
them until the joiner stage is ready to read them.

The number of records that need to be "buffered" in this way varies.
In the case with the least requirement for such buffering, the splitter
stage writes a record to each of its output streams in rotation; the
joiner stage reads records in stream-number order; and none of the
intermediate stages delays the records.  In this case, the flow of the
records is not data-dependent, and a stall can be prevented simply by
introducing a quantum delay on the low-numbered stream(s), using a copy
stage:

+----------------------------------------------------------------------+
|                                                                      |
|      +----------+          +------+          +----------+            |
|      |          |----------| copy |----------|          |            |
|      |          |          +------+          |          |            |
|   ---|   chop   |                            |   spec   |---         |
|      |          |                            |          |            |
|      |          |----------------------------|          |            |
|      +----------+                            +----------+            |
|                                                                      |
+----------------------------------------------------------------------+

copy is a very simple stage consisting of a loop containing readto and
output commands.  It does a consuming read to get a record and then
copies that record to its output stream.  So, the copy stage here
consumes the record that chop writes to its primary output stream,
freeing chop to write a record to its secondary output stream.
Meanwhile, copy writes the first record to its output stream.
As a result, spec then finds records available on both of its input
streams and consumes them both, freeing chop to write to its primary
output stream again and freeing copy to read from that stream again.

At the other extreme in the amount of buffering required is the case
where the joiner stage is fanin.  Because fanin will read no records
from its secondary stream until it has read all the records from its
primary stream, one must introduce a stage such as buffer to consume
all records on the secondary stream and hold them until fanin is ready
for them:

+----------------------------------------------------------------------+
|                                                                      |
|      +----------+          +--------+          +----------+          |
|      |          |----------|        |----------|          |          |
|      |          |          +--------+          |          |          |
|   ---|   chop   |                              |  fanin   |---       |
|      |          |          +--------+          |          |          |
|      |          |----------| buffer |----------|          |          |
|      +----------+          +--------+          +----------+          |
|                                                                      |
+----------------------------------------------------------------------+

Otherwise, the splitter stage would become blocked trying to write the
first record to fanin's secondary input.

The same requirement to insert a stage to buffer all the records on a
stream may arise because one of the other streams contains a stage,
such as sort, that buffers all the records:

+----------------------------------------------------------------------+
|                                                                      |
|      +----------+          +------+            +----------+          |
|      |          |----------| sort |------------|          |          |
|      |          |          +------+            |          |          |
|   ---|   chop   |                              |   spec   |---       |
|      |          |         +--------+           |          |          |
|      |          |---------| buffer |-----------|          |          |
|      +----------+         +--------+           +----------+          |
|                                                                      |
+----------------------------------------------------------------------+

In the intermediate case, one or more of the streams may need some
buffering of records, depending on the order in which the splitter
stage decides to write and the order in which the joiner stage decides
to read.
This is a job for elastic:

+----------------------------------------------------------------------+
|                                                                      |
|              +--------+        +-------+      +---------+            |
|              |        |--------|elastic|------|         |            |
|   +----+     |        |        +-------+      |         |    +----+  |
| --|spec|-----| fanout |                       | collate |----|spec|--|
|   +----+     |        |   +-----+   +------+  |         |    +----+  |
|              |        |---|xlate|---|locate|--|         |            |
|              +--------+   +-----+   +------+  +---------+            |
|                                                                      |
+----------------------------------------------------------------------+

When elastic has only one input stream, as in this case, it copies its
input records to its output, buffering as many as may be necessary to
prevent a pipeline stall.  It reads input records whenever they become
available and writes output records as they are consumed, while
attempting to minimize the number of records in its buffer.

The diagram above represents this REXX pipeline stage, which does a
mixed-case locate:

+----------------------------------------------------------------------+
|                                                                      |
|  /* LOCATEM REXX:  Mixed-case LOCATE Stage */                        |
|                                                                      |
|  Parse Upper Arg target         /* Upper-case the target. */         |
|                                                                      |
|  'CALLPIPE (endchar ?)',                                             |
|  '*: |',                        /* Get input from pipeline. */       |
|  'spec number 1',               /* Prefix each record with... */     |
|  '1-* next |',                  /* ...number in columns 1-10. */     |
|  'f: fanout |',                 /* Make two copies of each. */       |
|  'elastic |',                   /* Buffer as required. */            |
|  'c: collate 1-10 master |',    /* Select matching mixed-case. */    |
|  'spec 11-* 1 |',               /* Remove record numbers. */         |
|  '*:',                          /* Send output to pipeline. */       |
|  '?',                                                                |
|  'f: |',                        /* Second set of records here. */    |
|  'xlate upper |',               /* Upper-case this set. */           |
|  'locate 11-*' target '|',      /* Select records with target. */    |
|  'c:'                           /* Send to COLLATE's secondary. */   |
|                                                                      |
+----------------------------------------------------------------------+

The first spec stage prefixes each record received from the calling
pipeline with a 10-digit record number.
The fanout stage then makes two copies of each record.  The copy that
is written to fanout's primary output stream is not changed as it
passes through this pipeline.  The copy that is written to fanout's
secondary output stream is translated to upper-case (by the xlate
stage) and tested (by the locate stage) to see whether it contains the
specified target string (which has also been upper-cased).  The collate
stage reads records from both of its input streams, to find pairs that
have matching record numbers.  When collate finds a matching record
pair, it copies to its output stream the record that came from its
primary input stream (which is the original mixed-case record).  (This
record will be the mixed-case version of an upper-cased record that was
selected by the locate stage, since only the records that locate
selects reach collate's secondary input.)  collate discards all other
input records.  The second spec stage removes the record number from
the selected records, after which they flow back into the calling
pipeline.

The elastic stage here prevents a pipeline stall by consuming the
records fanout writes on its primary output stream, as it writes them,
thus freeing fanout to write to its secondary output stream.  The
elastic stage then makes the records available to collate whenever it
wants them, so that collate always has a record available on its
primary input when locate selects another record and writes it to
collate's secondary input.

I first saw this example at SHARE a few years ago, when Stuart McRae
presented it as an argument for a "buffer-as-required" stage, which was
not something a user could implement in REXX.  Because elastic did not
exist then, Stuart was forced to use two buffer stages, one in each
branch of his pipeline.  So, running his version required enough
virtual memory to hold one complete and one partial copy of the input
file.
Running the new version requires far less memory, because elastic
attempts to buffer only as many records as it must to prevent a stall.

In general, one can use elastic wherever buffering is needed to prevent
a stall.  However, if one knows that only a quantum delay is needed,
then copy is more efficient than elastic, while if one knows that the
entire file must be buffered, then buffer is more efficient than
elastic.  If the file being processed is large, then it may be worth
doing the analysis to distinguish the cases.

elastic with two input streams:  When elastic has two input streams,
its secondary input is assumed to be fed back from (a derivative of)
its primary output stream.  This is a feature that bends both pipelines
and minds.  It allows one to write pipelines that contain recursive
sections, as shown here:

+----------------------------------------------------------------------+
|                                                                      |
|             +---------+           +---------+                        |
|             |         |           |         |                        |
|   --------->| elastic |---------->| process |--------->              |
|  +--------->|         |           |         |---+                    |
|  |          +---------+           +---------+   |                    |
|  |                                              |                    |
|  +----------------------------------------------+                    |
|                                                                      |
+----------------------------------------------------------------------+

In this diagram, "process" represents a cascade of pipeline stages
through which input records (or their derivatives) flow repeatedly.
(Of course, "process" must have a way of terminating or of extracting
records from the looping flow, so that the pipeline does not loop
forever.)

When elastic has two input streams, it copies records from its primary
input stream to its primary output stream and reads records from its
secondary input stream into an internal buffer.  When it receives
end-of-file on its primary input stream, it begins writing the records
from its buffer to its primary output stream, continuing to read into
the buffer any records arriving on its secondary input.
When the buffer becomes empty, elastic gives up control to allow any
other ready stage to run.  When it regains control, it determines
whether it has any additional secondary input and terminates if it does
not, thus forestalling a pipeline stall.

WCOUNT EXEC is an example of using elastic with two input streams.  It
reads a SCRIPT file and all imbedded files (to any nesting level) in
order to count the words in a document:

+----------------------------------------------------------------------+
|                                                                      |
|  'PIPE (endchar ? name WCOUNT)',  /* Count words with imbeds. */     |
|  'literal' Arg(1) 'script |',     /* Name master SCRIPT file. */     |
|  'e: elastic |',                  /* Feedback without stalling. */   |
|  'getfile |',                     /* Read the named files. */        |
|  'f: find .im_|',                 /* Select imbed statements. */     |
|  'spec',                          /* Transform an imbed... */        |
|  'word 2 1',                      /* ...statement into a... */       |
|  '/script/ nextword |',           /* ...filename record. */          |
|  'e:',                            /* Route name to ELASTIC. */       |
|  '?',                                                                |
|  'f: |',                          /* SCRIPT data records to here. */ |
|  'count words |',                 /* Count the "words". */           |
|  'console'                        /* Display the count. */           |
|                                                                      |
+----------------------------------------------------------------------+

The name of the master file flows through the elastic stage to the
getfile stage, which reads the file into the pipeline.  Then the find
stage sends all the records that do not start with a SCRIPT imbed
command to its secondary output and all imbed commands to its primary
output.  The spec stage converts the imbed commands to file names, and
those file name records flow back to the elastic stage on its secondary
input to begin the process anew.  Ultimately, all imbed commands will
be found and all imbedded files will be read and all their data records
sent to the secondary output of find, at which time the buffer
maintained by elastic will become empty, causing elastic to terminate.
(Note that because elastic processes all the records on its primary
input stream before it processes any of the records on its secondary
input stream, the imbedded files follow the master file on the primary
output stream; that is, they are not imbedded in place.)

In the next example, the screenful of output from a Real-Time Monitor
command is examined to see whether there is an indication that more
output could be obtained by issuing a NEXT command.  If so, a NEXT
command is built and fed into the secondary input of elastic to be sent
to the SMART virtual machine.  The loop terminates when a response from
SMART does not contain a line that gets converted into a NEXT command:

+----------------------------------------------------------------------+
|                                                                      |
|  'PIPE (endchar ?)',              /* Drive RTM for full display: */  |
|  'literal d user all |',          /* First command for RTM. */       |
|  'e: elastic |',                  /* (Makes loop in pipeline.) */    |
|  'vmc smart |',                   /* Commands to RTM via VMCF. */    |
|  'f: find ISSUE THE COMMAND "NEXT" |',  /* Says it's got more? */    |
|  'spec /next/ 1 |',               /* Yes, change into NEXT cmd. */   |
|  'e:',                            /* Flow into loop. */              |
|  '?',                                                                |
|  'f: |',                          /* Data lines to here. */          |
|  'strip trailing |',              /* (To get rid of blank lines.) */ |
|  'console'                        /* Display on console. */          |
|                                                                      |
+----------------------------------------------------------------------+

Other new stages and subcommands

Many very useful new pipeline stages have been added in recent releases
of VM/ESA, in addition to the ones I have already mentioned:

abbrev:  abbrev is a new selection filter that is documented in the
author's help files.  It selects records that begin with an
abbreviation of a specified word.  Arguments to abbrev can specify the
minimum legal truncation and whether case is to be respected in
matching.
This abbrev stage selects records that start with the characters "sha",
"shar", or "share":

   . . . | abbrev share 3 | . . .

addrdw:  addrdw is another new stage documented only in the author's
help files.  It prefixes a "record descriptor word" to each record it
reads.  It can prefix the traditional OS-style record descriptor word
for variable-length records or a simple 2- or 4-byte length field, with
a value that is either inclusive of or exclusive of the length of the
record descriptor word itself.  In the past, one could prefix a
two-byte binary length field to a record by using the v2c conversion
routine of spec, but addrdw is much less expensive.  The most typical
application for addrdw is one such as we saw earlier in deblocking CP
monitor data, where each record begins with a length field that is
removed during deblocking and must be restored.  At Princeton, we saw a
forty percent reduction in CPU utilization in a filter that processes a
heavily-used accounting file when we replaced this sequence:

   'deblock sf |',              /* Get individual records.    */
   'spec 1-* 1 / / next |',     /* Restore the length fields. */
   'spec 1-* v2c 1 |',          /*    "    "     "      "     */
   'spec 1;-3 1 |',             /*    "    "     "      "     */

with this one:

   'deblock sf |',              /* Get individual records.    */
   'addrdw sf |',               /* Restore the length fields. */

all:  all is a new selection stage that selects records defined by a
string expression.  all is similar in operation to the ALL XEDIT macro,
but allows the construction of more complex expressions.  The
expressions may contain strings combined with AND, OR, and NOT
operators and grouped with parentheses to show precedence:

   ...| all (/A/ & ^(/B/ ! /C/)) ! /D/ |...

(The OR operator can be either an exclamation mark or a vertical bar.)
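The selection such an expression performs is ordinary boolean logic
over substring tests.  As a rough illustration (a Python sketch, not
the all stage itself; the sample records are invented), the expression
above keeps a record if it contains "A" but neither "B" nor "C", or if
it contains "D":

```python
# (/A/ & ^(/B/ ! /C/)) rendered as an ordinary predicate, ! as OR
def all_like(records, predicate):
    """Keep the records for which the predicate holds, as the ALL
    stage keeps records satisfying its string expression."""
    return [r for r in records if predicate(r)]

expr = lambda r: ('A' in r and not ('B' in r or 'C' in r)) or 'D' in r

print(all_like(['A X', 'A B', 'D', 'E'], expr))   # -> ['A X', 'D']
```

In the pipeline itself this logic is, as the next paragraph explains,
spun out into a cascade of locate and nlocate stages rather than
evaluated record-by-record like this.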
Under the covers, all builds a multi-stream pipeline consisting of a
combination of locate and nlocate filters that implement the specified
expression, and it then executes that multi-stream pipeline.  Where
performance is important, the equivalent multi-stream pipeline should
be used in preference to all.  (However, all has options to write the
generated pipeline to its output or to a file, so it can also be used
as a pipeline generator.)

beat:  beat was added in CMS 12 to provide a "heartbeat monitor" for
pipelines.  You will find it documented in the author's help files.
One specifies a time interval for beat.  Then, ordinarily, beat simply
copies its primary input to its primary output, but whenever it fails
to receive a record on its primary input stream within the specified
interval, it writes a record containing its argument string to its
secondary output.  It stops as soon as its secondary output becomes
unconnected.  Thus, this fragment illustrates a technique for stopping
a pipeline when it has received no input for 5 seconds:

+----------------------------------------------------------------------+
|                                                                      |
|  'PIPE (endchar ?)',                                                 |
|  'starmsg |',                 /* Listen for *MSG input. */           |
|  'b: beat 5 /Timed-out/ |',   /* Time-out if idle for 5 secs. */     |
|  . . .                                                               |
|  '?',                                                                |
|  'b: |',                      /* "Timed-out" record to here. */      |
|  'take 1 |',                  /* Stop when get one of them. */       |
|  'console'                    /* Display it on the console. */       |
|                                                                      |
+----------------------------------------------------------------------+

When beat times out, it writes a record containing the specified string
to its secondary output, where take 1 terminates as soon as it has
received one record.

begoutput:  The new pipeline command begoutput is used to capture "host
commands" issued while a REXX filter is executing.
When begoutput is in effect, any statements that REXX does not handle
itself are written to the primary output stream, rather than being
issued as pipeline commands.  begoutput mode remains in effect until a
"command" is found that matches the delimiter specified by the argument
to begoutput (or until end-of-file).  So, if this fragment were part of
a REXX filter, then the extract and enable subcommands would be written
to the output stream of that filter:

+----------------------------------------------------------------------+
|                                                                      |
|  /* */ "BEGOUTPUT"              /* Null string terminates. */        |
|  'extract device'               /* Issue EXTRACT subcommand. */      |
|  If device.0 > 0 Then Do                                             |
|     'enable' device.1           /* Issue ENABLE subcommand. */       |
|     If RC = 0                                                        |
|        Then Say device.1 'has been enabled.'                         |
|  End                                                                 |
|  ''                             /* Terminate subcmd burst. */        |
|                                                                      |
+----------------------------------------------------------------------+

browse:  browse is an experimental device driver that displays its
input on a 3270 terminal (either the virtual machine console or a
dialled terminal) with scrolling and searching capability and even a
pop-up window.  browse is quite easy to use.  You will find
documentation for it in the author's help files.

casei and zone:  casei is a new stage that is used to execute a
case-sensitive filter, such as locate or find, in a case-insensitive
manner.  For example, this stage will select all records that begin
with the word "share", regardless of case:

   . . . | casei find share_| . . .

The syntax of casei is:

   CASEI filter

casei operates by translating both the argument of the specified filter
and a copy of the input records to upper-case and then executing the
filter.  If the zone operand is specified, the filter is applied
against only the specified input range.
If the reverse operand is specified, the records are temporarily
reversed before the filter is applied.  However, the original
mixed-case, unreversed records corresponding to those selected or
discarded by the specified filter are written to the output streams.

zone is a new stage that executes a filter and restricts its operation
to a specified input range.  zone can be used to cause a filter such as
find, which normally examines the beginning of records, to examine some
other portion of the records instead.  For example, this pipeline
selects all records from the first one that has the word "Appendix"
starting in column 40:

   pipe < plunge listing | zone 40-47 frlabel Appendix| . . .

zone can also be used to cause a filter to operate on fields at a
specified displacement from the ends of records, which is especially
useful when dealing with records that have unknown or variable
lengths.  This filter selects records up to the first one with the
string "banana" beginning in the tenth-to-last column, whichever column
that may be:

   . . . | zone -10;-5 tolabel banana| . . .

The argument to zone need not be a column range; it can also be a field
range or a word range.  This filter selects all records up to the first
one that has a second-to-last word starting with the string "banana":

   . . . | zone word -2 tolabel banana| . . .

The syntax of zone is:

   ZONE inputrange filter

In other words, casei and zone are commutative.  Either may be
specified alone, but they may also be specified together, in which case
the results are the same regardless of their order.  In whichever order
casei and zone are specified, if both are specified then the order of
the operations is always to select the data from the input record, then
to upper-case the data, and then (optionally) to reverse the data.
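That fixed order of operations (select the zone, upper-case it,
optionally reverse it, run the filter, but emit the original record)
can be sketched as follows.  This is an illustrative Python model of
the behaviour just described, not the real implementation, and the
sample records are invented:

```python
def casei_zone(records, first, last, target, reverse=False):
    """Model of CASEI combined with ZONE for a find-like filter:
    the test is applied to a transformed copy of each record, but
    the ORIGINAL record is what gets written out."""
    out = []
    for rec in records:
        data = rec[first - 1:last]       # zone: 1-based column range
        data = data.upper()              # casei: upper-case the copy
        if reverse:
            data = data[::-1]            # optional reversal
        if data.startswith(target.upper()):   # find-like comparison
            out.append(rec)              # ...but keep the original
    return out

# find 'share' in columns 3-7, ignoring case
print(casei_zone(['..Share..', '..blah...'], 3, 7, 'share'))
# -> ['..Share..']
```

Note that whichever transformations apply, it is the untouched
mixed-case record that reaches the output stream, just as the text
above describes.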
0 Given the capability to specify offsets from the ends of records, one seldom needs to use the reverse option, but here is an example in which reverse is needed: 0 . . . | zone -3;-1 reverse between /NO/ /FFO/ | . . . 0 (Hint: this stage selects groups of records beginning with a record that ends with "ON" and ending with a record that ends with "OFF".) 0 : combine: combine was added in CMS 10 and enhanced in CMS 11 and CMS 12. + combine + _______ It combines data from groups of records into single records. The records in each group may be ORed together, ANDed, or XORed. Or each column in the output record may contain the first occurrence of a column in the group or the last occurrence of a column in the group. 1 0 Pipe Dreams Page 47 ------------------------------------------------------------------------ 0 +----------------------------------------------------------------------+ | | . | COMBINE [ | * | KEYLENGTH n] {FIRST | LAST | X | N | O} | | | | Number of additional records. Default is 1 to | | process two records. | . | KEYLENgth Combine records that start with same string. | | FIRST Retain the first occurrence of a column. | | LAST Retain the last occurrence of a column. | | X Exclusive Or | | N And | | O Or | | | +----------------------------------------------------------------------+ 0 There are many uses for combine. It was written originally for processing image data, but plumbers quickly noticed, for example, that the X option could be used to test for equality. 0 . In CMS 11, combine was enhanced to have a new option, keylength, which . specifies the number of leading columns that contain the key. With this . option in effect, runs of records that have the same key are combined. . This example of using combine keylength merges records from three files . to produce a single record for each Social Security number represented . in any of the files: 0 . +----------------------------------------------------------------------+ . | | . 
| /* FORMIKE EXEC: Merge records with same SSN from three files */ | . | | . | 'PIPE (endchar ?)', | . | '< mike file1 |', /* Read in first file. */ | . | 'append < mike file3 |', /* And third file. */ | . | 'f: fanin |', /* And the other files. */ | . | 'sort 1-* 1.9 |', /* Sort on SSNs in 1-9. */ | . | 'xlate 10-* 40 00 |', /* (So OR won't upper-case.) */ | . | 'combine keylength 9 OR |', /* OR together matching recs.*/ | . | 'xlate 10-* 00 40 |', /* Restore blanks. */ | . | '> output file a', /* Write merged records. */ | . | '?', | . | '< mike file2 |', /* Read in second file. */ | . | 'spec 2.9 1 41-* 41 35.5 70 |', /* (So no overlap later.) */ | . | 'f:' /* Route into FANIN. */ | . | | . +----------------------------------------------------------------------+ 0 . By the time they reach the combine stage, the records from the three . files have the Social Security number in columns 1-9, and they have been . sorted on that field. They have also been formatted so that each record . type has binary zeros where the other record types have character data . fields. Thus, ORing the records for a given key together produces a . single record containing all available data fields. 1 0 Page 48 Pipe Dreams ------------------------------------------------------------------------ 0 : In CMS 12, combine supports a secondary input stream, in which case it : combines pairs of records, one from each of its input streams. This : simple example masks the data in variable-length records by : Exclusive-ORing each of them with a record of the same length containing : hexadecimal FFs: 0 : +----------------------------------------------------------------------+ : | | : | 'CALLPIPE (endchar ?)', /* Mask data with FFs: */ | : | '*: |', /* Input from pipeline. */ | : | 'o: fanout |', /* Divert a copy. */ | : | 'copy |', /* (To prevent a stall.) */ | : | 'c: combine X |', /* Exclusive-OR w/secondary. */ | : | '*:', /* Output to pipeline. */ | : | '?', | : | 'o: |', /* Second copy to here. 
*/ | : | 'xlate *-* 00-FF FF |', /* Convert to all FFs. */ | : | 'c:' /* Feed to COMBINE. */ | : | | : +----------------------------------------------------------------------+ 0 : A second copy of each input record is diverted to the second pipeline : segment, where it is translated entirely to hexadecimal FFs and is then : routed to the secondary input of combine to be Exclusive-ORed with the : first copy of the same record. 0 : deal and gather: The deal stage added in CMS 12 implements a function + deal and gather + _______________ : that plumbers have frequently written in REXX in the past. It reads : records from its primary input stream and writes them to its output : streams in round-robin fashion. Alternatively, it can read stream : identifiers from its secondary input and write the corresponding records : from its primary input to the specified output streams. When the : keyword key is specified, deal recognizes groups of records as runs of : records that are the same in the specified range and writes all the : records in a group to one output stream and all the records in the next : group to the next output stream and so forth. 0 : gather performs the inverse operation, reading records from its input : streams in turn and writing them to its primary output stream. : Alternatively, it can read stream identifiers from its primary stream : and use those to select the next input stream to read from. (If the : stream identifier resolves to 0, the stream identifier record itself is : written to the output.) 0 : One very simple use I have found for deal is to have it write the : matching detail and master records produced by lookup to different : streams: 1 0 Pipe Dreams Page 49 ------------------------------------------------------------------------ 0 : +----------------------------------------------------------------------+ : | | : | . . . /* Match records and messages: */ | : | 'l: lookup 7.2 1.2 |', /* Select suspension records. 
*/ | : | 'd: deal |', /* Divert masters (messages). */ | : | . . . /* Process details (records). */ | : | '?', | : | 'stem nasty. |', /* Table of nasty-record msgs. */ | : | 'l:', /* Into LOOKUP as masters. */ | : | '?', | : | 'd: |', /* Suspension messages to here.*/ | : | 'spec /MDATPEEK:/ 1 4-* nw |', /* Format the messages. */ | : | 'console' /* Display the messages. */ | : | | : +----------------------------------------------------------------------+ 0 : The primary output stream of lookup contains pairs of detail records and : master records. As used here, deal writes a detail record to its : primary output and then a master record to its secondary and then a : detail to its primary, and so on. 0 . dfsort: dfsort is an experimental stage that is documented in the + dfsort + ______ . author's help files. It invokes DF/SORT CMS, passing its argument string as the sort control statement; for example: 0 . . . | dfsort option nolist sort fields=(5,1,ch,a) | . . . 0 escape: escape is a new stage that inserts escape characters into the + escape + ______ records that pass through it, so that any special characters in those records will be treated as data later on. The argument to escape is the escape character followed by a list of the other characters that must be "escaped". If the argument is not specified, it defaults to a double quote, a vertical bar, and a back-slant. 0 The most common reason to need an escape stage is to prepare pipeline stages that have been built from user data for insertion into a pipeline. escape is used to make sure that the user data do not cause errors in the pipeline because they contain characters that the pipeline scanner regards as special. 
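The effect of escape is easy to mimic outside of Pipes. This little Python sketch (an illustrative analogue, not the stage itself) shows what happens to each record, using the stage's default escape set of a double quote, a vertical bar, and a back-slant:

```python
def escape(record, esc='"', also='|\\'):
    # Rough analogue of the escape stage: the escape character is
    # prefixed to itself and to each of the other special characters,
    # so that those characters will later be treated as data.
    specials = esc + also
    return ''.join(esc + ch if ch in specials else ch for ch in record)
```

With the defaults, a record such as say "hi" | exit comes out with every quote and vertical bar preceded by a double quote, so the pipeline scanner will pass them through as ordinary data.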
In a typical example, a service machine is configured via a parameter file called SERVER PARMS, a portion of which is shown here: 0 +----------------------------------------------------------------------+ | | | driver = BITFTP | | location = North America | | owner = MAINT@PUCC | | | +----------------------------------------------------------------------+ 0 During its initialization, the server tailors its messages by filling in values from this parameter file. So, for example, wherever "&driver" 1 0 Page 50 Pipe Dreams ------------------------------------------------------------------------ 0 appears in a message text, it is replaced by the value specified for the parameter "driver". The messages are in files such as LOWRATE MSGTEXT, which contains several fields for which a parameter value should be substituted: 0 +----------------------------------------------------------------------+ | | | You will note that the rate of transfer between &driver and the | | host you are FTPing from is quite low. If possible, please use | | a different host in the future. (If you have any questions about | | this, they may be addressed to &owner.) &driver is located in | | &location and thus has the best connectivity to hosts that are | | also in &location. | | | +----------------------------------------------------------------------+ 0 To tailor the messages, the server's initialization routine reads the parameter file and converts its contents into a series of change stages, of the form: 0 change =&driver=BITFTP= 0 Since these change stages will be inserted into a pipeline that uses "|" as its stage separator, "@" as its end character, and """ as its escape character, these three characters must be escaped in the arguments of the change stages. 
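In Python terms, the initialization step the server performs amounts to something like this sketch (the helper name build_changes is hypothetical; the escaping and joining follow the rules just described):

```python
def build_changes(parm_lines):
    # Hypothetical sketch: turn each "name = value" parameter line
    # into a CHANGE stage, escape the pipeline's special characters
    # (", |, and @) with its escape character ("), and join the
    # stages with the stage separator.
    stages = []
    for line in parm_lines:
        name, _, value = line.partition('=')
        stage = 'change =&%s=%s=' % (name.strip(), value.strip())
        stage = ''.join('"' + c if c in '"|@' else c for c in stage)
        stages.append(stage)
    return ' | '.join(stages)
```

Note that the at-sign in a value such as MAINT@PUCC gets a double quote prefixed to it, since "@" will be the end character of the pipeline into which these stages are inserted.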
This pipeline, which builds the change stages, uses several of the new features of Pipes, in addition to the escape stage: + _____ 0 +----------------------------------------------------------------------+ | | | 'PIPE', /* Build CHANGEs from parms: */ | | '< server parms |', /* Read parameter file. */ | | 'spec fieldsep =', /* Fields separated by "=". */ | | '/change =&/ 1', /* Format CHANGE stage: */ | | 'field1 strip next', /* parameter name, */ | | '/=/ next', /* delimiter (=), */ | | 'field2 strip next', /* parameter value, */ | | '/=/ next |', /* delimiter (=). */ | | 'escape /"||@/ |', /* Make safe to run in pipe. */ | | 'join * / || / |', /* Join stages together. */ | | 'var changes' /* Save stages in variable. */ | | | +----------------------------------------------------------------------+ 0 In the spec stage, the new fieldseparator keyword is used to specify that the fields in the input record (from the parameter file) are delimited by equal signs. This makes picking out the parameter names and the parameter values very easy; they are simply field1 and field2. This spec stage also uses the new strip keyword to specify that the fields are to be stripped of leading and trailing blanks before being placed in the output record. Another new feature is used in the escape stage; the vertical bar in its argument string is escaped by being 1 0 Pipe Dreams Page 51 ------------------------------------------------------------------------ 0 doubled (so that it will not be mistaken for a stage separator in this pipeline). 0 The escape stage processes the newly built change stages so that anywhere in their arguments that the characters double quote, vertical bar, or at-sign appear, they are preceded by a double quote. Then the join stage joins all these new stages together separated by vertical bars. 
(The vertical bar in the argument to join is also escaped in this pipeline by being doubled; it is not escaped in the pipeline segment being constructed, because it is intended to be a stage separator there.) 0 Then, once the entire parameter file has been converted into change stages and all of those stages have been joined into a segment of pipeline, that segment is stored in the REXX variable "changes". The server can safely use the "changes" pipeline segment whenever it needs to tailor one of its messages: 0 +----------------------------------------------------------------------+ | | | /* Tailor a message by substituting parm values for "&name". */ | | | | 'PIPE (escape " endchar @)', /* Apply CHANGEs to message: */ | | '< lowrate msgtext |', /* Read in a message text. */ | | changes '|', /* Substitute parameters. */ | | 'stem lowratemsg.' /* Save tailored message. */ | | | +----------------------------------------------------------------------+ 0 : fanintwo: fanintwo is described in the author's help file as an "arcane + fanintwo + ________ : gateway", which is rather a good description. fanintwo is a two-input : faninany that gives priority to the secondary input. When a record : becomes available on fanintwo's primary input stream, it suspends itself : and then tests for whether a record has become available on its : secondary input stream. If one has, it is written to the output before : the record from the primary input. When fanintwo has written a record : from its primary input, it passes any records it can from its secondary : input before consuming the record on the primary input, thereby keeping : the primary-stream producer blocked. Thus, fanintwo "is useful for : closing an inner feedback loop where the feedback should have priority : over the input from outside the loop". In other words, it is just : lovely for writing recursive pipeline filters! 
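The priority rule is easier to see in a toy, single-threaded Python analogue. This sketch captures only the ordering (it does not model the concurrent, record-at-a-time coordination of the real gateway):

```python
from collections import deque

def fanintwo(primary, secondary):
    # Toy analogue of the ordering rule: before consuming each record
    # from the primary input, first pass along whatever is queued on
    # the secondary (feedback) input, which therefore has priority.
    out = []
    for rec in primary:
        while secondary:                # secondary input has priority
            out.append(secondary.popleft())
        out.append(rec)
    while secondary:                    # drain any leftover feedback
        out.append(secondary.popleft())
    return out
```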
0 : This recursive filter, DOTIMBED REXX, reads a SCRIPT file and replaces : each occurrence of a ".im" statement with the specified file, continuing : the process to any level of nesting: 1 0 Page 52 Pipe Dreams ------------------------------------------------------------------------ 0 : +----------------------------------------------------------------------+ : | | : | /* DOTIMBED REXX: Imbed SCRIPT .im files in place */ | : | | : | 'PEEKTO' /* At end of recursion? */ | : | If RC ^= 0 Then Exit 0 /* Yes, exit. */ | : | | : | 'CALLPIPE (name DotIM endchar ?)', | : | 'y: faninany |', /* My non-imbed lines. */ | : | 'i: fanintwo |', /* Plus non-imbeds from lower. */ | : | '*.output:', /* Output to caller. */ | : | '?', | : | '*.input: |', /* Input from caller. */ | : | 'f: find .im_|', /* Divert if not imbed. */ | : | 'spec w2 1 /SCRIPT/ nw |', /* Convert .im to file name. */ | : | 'getfile |', /* Read the SCRIPT file. */ | : | 'dotimbed |', /* Imbed recursively. */ | : | 'i:', /* Shunt lower non-imbed lines.*/ | : | '?', | : | 'f: | y:' /* Shunt my non-imbed lines. */ | : | | : +----------------------------------------------------------------------+ 0 : Records received from the caller are tested for being ".im" statements. : Those that are not ".im" statements are routed to the faninany on : fanintwo's primary input. The ".im" statements are converted to the : name of the corresponding SCRIPT file, which is then read by the getfile : stage and passed to a lower-level dotimbed stage. Each lower-level : dotimbed stage writes records that are not ".im" statements to its : primary output stream, which is directed to the secondary input of the : fanintwo gateway in the calling dotimbed, thus giving precedence to : records produced by the stages lower in the recursion, which has the : effect of imbedding the files "in place". 0 : (CMS 12 contains an even more arcane gateway called faninslow, which is : "the fastest way to copy a record". 
This is not documented even in the : author's help files.) 0 : fanoutwo: fanoutwo is another new gateway documented only in the + fanoutwo + ________ : author's help files. It is a fairly straightforward variant of fanout : that can be used to propagate end-of-file backwards as soon as the : primary output stream goes to end-of-file. fanout with the stop anyeof : option writes an output record to each stream and then checks whether : any of them has gone to end-of-file (that is, whether any of the stages : reading from fanout's output streams has terminated). The result is : that when the primary output stream goes to end-of-file, the secondary : output stream receives one more record than the primary consumer has : accepted. If that is not desirable, fanout stop anyeof should be : replaced with fanoutwo. 0 : In this example, the primary output stream of fanoutwo is connected to : the output of the subroutine pipeline, and its secondary output stream 1 0 Pipe Dreams Page 53 ------------------------------------------------------------------------ 0 : is connected to a series of stages that categorize and count the records : being processed: 0 : +----------------------------------------------------------------------+ : | | : | 'CALLPIPE (endchar ?)', | : | . . . | : | 'o: fanoutwo |', /* Divert records to count. */ | : | '*:', /* Output to calling pipeline. */ | : | '?', | : | 'o: |', /* Second copy of data recs. */ | : | 'sort count 5.1 |', /* Count of each domain type. */ | : | 'spec', /* Format to load into REXX: */ | : | '/=NRECS./ 1', | : | '15.1 c2d next', /* =NRECS.n=count */ | : | '/=/ next', | : | '1.10 strip next |', | : | 'change / // |', /* Why can't STRIP do this? */ | : | 'varset' /* Store counts by domain. 
*/ | : | | : +----------------------------------------------------------------------+ 0 : If the calling pipeline stops accepting output from this subroutine, the : counts calculated in the second pipeline segment should not include the : record that was being written when the end-of-file was encountered, as : that record was never received by the calling pipeline. fanoutwo : prevents the sort count from getting more records than are actually : written to the calling pipeline. 0 | The eofback stage, which has been added since CMS 12, runs a device | driver with end-of-file being propagated backwards. It uses fanoutwo | under the covers. 0 frtarget and totarget: frtarget and totarget are indispensable new + frtarget and totarget + ______________________ stages that were added in CMS 10. The idea behind them is that they invoke a selection stage and watch as it reads the input stream and selects or discards records by its own selection criteria. When the target stage first selects a record, frtarget and totarget are triggered, respectively, to select or discard all subsequent records. totarget runs its target stage and selects all records until the target stage produces an output record on its primary output stream. That record and all subsequent records are rejected (or written to the secondary output). Thus, this totarget stage copies all records up until the first record that contains the string "abc": 0 . . . | totarget locate /abc/ | . . . 0 frtarget runs its target stage and rejects all records until the first one that the target stage produces on its primary output stream. That record and all subsequent records are selected. Thus, this frtarget stage discards everything before the first record that contains the string "abc": 1 0 Page 54 Pipe Dreams ------------------------------------------------------------------------ 0 . . . | frtarget locate /abc/ | . . . 0 frtarget and totarget behave like other partitioning selection stages. 
(Partitioning stages are stages, such as take and tolabel, that divide their input stream in two.) So, for example, when totarget is used in a subroutine pipeline, it can "sip" from the input stream, leaving the record that causes a match (and all subsequent records) in the input stream for later processing. Thus, a REXX stage that contains this callpipe command: 0 'CALLPIPE *: | totarget nlocate 1 | *:' 0 might be used to split a mail file into two portions, the header and the body. The Internet mail standard RFC822 specifies that the mail headers (which contain the mail addressing information) are separated from the mail body (which contains the sender's message) by a null line. Thus, this subroutine pipeline would split a mail file according to that standard by selecting all records up until the first that is not at least one byte long. That is, totarget would select all records until the one that nlocate 1 selects, which would be the first record shorter than one byte, the null record. Thus, the header records would be copied to the output stream, but the null record and the mail body would be left in the input stream for processing by other commands later in the stage that issued this callpipe. 0 : joincont: joincont, which was new in CMS 10, has been enhanced in + joincont + ________ : CMS 12 with the new anyof and keep options. joincont joins continuation lines according to the specified rules: 0 +----------------------------------------------------------------------+ | | : | JOINCONT [NOT] [TRAILING|LEADING] [ANYOF] [KEEP] <string> | | [<string>] | | | +----------------------------------------------------------------------+ 0 Continuation lines can be marked by a trailing string in the first record or a leading string in the second record (or the absence of such : a string when not is specified).
When the anyof option is used, : continuations can be indicated by any of the characters in the first : argument string (or the absence of any of those characters when not is : specified). The continuation string or character is deleted when the : records are joined, unless keep is specified. If a second string is specified in the arguments to joincont, it is inserted between joined records. 0 joincont is used here to "unfold" the continued header lines in RFC822-format mail: 1 0 Pipe Dreams Page 55 ------------------------------------------------------------------------ 0 +----------------------------------------------------------------------+ | | | /* 822XPND REXX: Unfold RFC822-format mail headers. */ | | | | 'CALLPIPE', | | '*: |', /* Connect to calling pipeline.*/ | | 'xlate *-* 05 40 |', /* Make white space whiter. */ | | 'strip trailing |', /* Remove trailing LWSP-chars. */ | | 'totarget nlocate 1 |', /* Stop at the null line. */ | : | 'joincont leading / / keep |',/* Unfold the header lines. */ | | '*:' /* Put them into the pipeline. */ | | | +----------------------------------------------------------------------+ 0 In RFC822-format mail, a mail header line is continued if the next line starts with "white space". The joincont stage here joins a continued : line to the preceding line, leaving the leading blank that indicated the : continuation, so that fields within the header line remain separated by white space, as they should be. 0 (Here, the totarget nlocate 1 stage we discussed above prevents this subroutine pipeline from reading beyond the end of the mail headers, so that the body of the mail file is not subjected to the unfolding operation.) 
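For comparison, the unfolding logic of that subroutine pipeline might be sketched in Python like this (an illustrative analogue of the stages above, not a replacement for them):

```python
def unfold_headers(lines):
    # Illustrative analogue of the subroutine pipeline: tabs become
    # blanks (xlate 05 40), trailing white space is removed (strip
    # trailing), processing stops at the null line (totarget
    # nlocate 1), and a line that starts with white space is joined
    # to the previous line, keeping the leading blank as a field
    # separator (joincont leading / / keep).
    out = []
    for line in lines:
        line = line.replace('\t', ' ').rstrip()
        if line == '':
            break                       # null line ends the headers
        if out and line.startswith(' '):
            out[-1] += line             # continuation: join, keep blank
        else:
            out.append(line)
    return out
```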
0 : This subroutine pipeline (from Joachim Becela, of IBM) has become the : canonical method for performing the equivalent of a REXX Space() : function in a pipeline: 0 : +----------------------------------------------------------------------+ : | | : | 'CALLPIPE (name Space)', | : | '*: |', /* Input from pipeline. */ | : | 'tokenise / / / / |', /* Split and delimit. */ | : | 'joincont not leading / / / / |', /* Rejoin Space'd. */ | : | 'strip leading |', /* Remove delimiting blanks. */ | : | 'locate 1 |', /* Discard last delimiter. */ | : | '*:' /* Output to pipeline. */ | : | | : +----------------------------------------------------------------------+ 0 : Although tokenise has been around for a long time, you may not be : familiar with it. tokenise breaks its input up at word boundaries and : writes a record containing only its second argument string after the set : of records derived from a single input record. (When the first argument : to tokenise is a blank, it has no effect.) Thus, the tokenise here : behaves like split, with the difference that it inserts a record : containing a single blank to delimit the groups of output records : derived from individual input records. The joincont joins records that : don't contain a leading blank to the record before them, inserting a : blank at the join. This restores the original records, but with any : multiple blanks replaced by a single blank. All but the first record 1 0 Page 56 Pipe Dreams ------------------------------------------------------------------------ 0 : have one leading blank, the delimiter for the previous group of records. : That is removed by the strip leading, which also makes the final : delimiter record null so that it can be discarded by locate 1. 0 | joincont has been further enhanced since CMS 12 to support an anycase | keyword and a column keyword, which takes an input range as its | argument. 
column is for use when the continuation indicator is neither | at the beginning nor at the end of the records. 0 juxtapose: juxtapose is one of my favorites of the new pipeline stages. + juxtapose + _________ It makes some things that were formerly difficult very easy to do. juxtapose is used to preface a record with a marker. It reads records from its primary and secondary input streams as they become available. Each record from the primary input stream is stored in a one-record buffer, replacing the previous contents of that buffer; the input record is then discarded. When a record is read from the secondary input stream, it is prefixed by the current contents of the buffer, and the combined record is written to the primary output stream. juxtapose is the stage to use when you need to associate each record on one stream with a variable number of records on another stream. Plumbers have been finding more and more ways to use juxtapose. 0 In this example, a CP directory is read and the userids from the USER cards are fed into the primary input stream for juxtapose. The MDISK cards are fed into its secondary input stream, causing juxtapose to prefix the corresponding userid to each MDISK card before writing it out on its primary output stream, no matter how many MDISKs there are per USER: 0 +----------------------------------------------------------------------+ | | | 'PIPE (endchar ?)' , /* Pair userid with MDISKs: */ | | '<' fn 'direct |' , /* Read the directory. */ | | 'l: nfind ____MDISK_|' , /* Divert MDISK statements */ | | 'find USER_|' , /* Select USER statements. */ | | 'spec word 2 1.8 |' , /* Isolate userid. */ | | 'j: juxtapose |' , /* Preface MDISK with userid. */ | | . . . | | '?', | | 'l: |' , /* MDISK statements to here. */ | | 'j:' /* Go merge with USER stmts. 
*/ | | | +----------------------------------------------------------------------+ 0 This example from John Hartmann creates the "outer product" of two files; that is, it pairs each record from one file with every record in another. instore creates a single record describing the second file, and that record is then duplicated as many times as necessary to allow synchronise to pair it with each record from the first file. As juxtapose reads each record from the first file into its buffer, outstore expands one copy of the descriptor record into a full copy of the second file, and each of those records is read by juxtapose and prefixed with the record in its buffer: 1 0 Pipe Dreams Page 57 ------------------------------------------------------------------------ 0 +----------------------------------------------------------------------+ | | | /* Pair each record of a file with all records from another */ | | | | 'CALLPIPE (endchar ?)' , | | '*.input.0: |' , /* Primary input stream. */ | | 's: synchronise |' , /* Synchronize the streams. */ | | 'j: juxtapose |' , /* Prefix record to records. */ | | '*:' , /* Primary output stream. */ | | '?', | | '*.input.1: |' , /* Secondary input stream. */ | | 'instore |' , /* Crunch file to one record. */ | | 'duplicate * |' , /* Copy the file descriptor. */ | | 's: |' , /* One for each primary rec. */ | | 'outstore |' , /* Expand to whole file. */ | | 'j:' /* Prefix with primary record.*/ | | | +----------------------------------------------------------------------+ 0 The next example (which comes from Rob van der Heij) sorts the words within records. First, a copy is made of each record. Then the first copy is turned into a sequence number, which is fed into juxtapose's primary input, while the second copy is split into one-word records that become juxtapose's secondary input. 
juxtapose reads the sequence number records into its buffer as they arrive and prefixes each word record with the current contents of that buffer, so that we get a stream of records each consisting of a single word prefixed by the sequence number of the input record that contained that word. Those records are sorted on both the number and the word, so that the words from input record 2 still follow the words from input record 1 and precede the words from input record 3, but are sorted within their set. unique first selects the first record of each set and diverts the others. The spec stages in the two streams remove the sequence numbers, and the first record of each set is prefixed by a binary zero (which is a character known not to exist in the data). The two streams are merged, and deblock linend joins the records together and splits them at the binary zeros, thus restoring the original record boundaries: 1 0 Page 58 Pipe Dreams ------------------------------------------------------------------------ 0 +----------------------------------------------------------------------+ | | | 'CALLPIPE (endchar %)' , /* Sort words in each record: */ | | '*: |' , /* Input from caller. */ | | 'fo: fanout |' , /* Make a second copy. */ | | 'spec number 1.10 |' , /* Make recs with just number. */ | | 'j: juxtapose |' , /* Prefix number to each word. */ | | 'sort |' , /* Sort on number and word. */ | | 'u: unique 1.10 first |' , /* First word for each number. */ | | 'spec x00 1 11-* 2 |' , /* Prefix unique character. */ | | 'fi: faninany |' , /* Merge two streams. */ | | 'deblock linend 00 |' , /* Original record boundaries. */ | | '*:' , /* Output to caller. */ | | '%', | | 'fo: |' , /* Second copy to here. */ | | 'split |' , /* One record per word. */ | | 'j:' , /* Go prefix with number. */ | | '%', | | 'u: |' , /* 2nd-nth words for each num. */ | | 'spec 11-* 2 |' , /* Remove number, prefix blank.*/ | | 'fi:' /* Merge with other records. 
*/ | | | +----------------------------------------------------------------------+ 0 listpds and snake: listpds was added in CMS 10. It lists the members + listpds and snake + _________________ of a partitioned dataset: 0 +----------------------------------------------------------------------+ | | | pipe listpds hcpgpi maclib | chop 8 | sort | pad 10 | snake 7 | cons | | | | APPCVM DEFWORKA HCPBPLBK HCPMWTBK HCPSXIBK IUCV VMCPARM | | CPED DF8PARM HCPCALL HCPREGCK HCPSXOBK SFBLOK VRDCBLOK | | DD4PARM0 HCPATTOP HCPGPI HCPSBIOP IPARML SPLINK | | DD8PARM0 HCPATTRB HCPMDLAT HCPSGIOP IPARMLX VMCMHDR | | | +----------------------------------------------------------------------+ 0 snake is a new stage that can be used to build multi-column page layouts. snake breaks the input file into columns of a specified depth and pastes the columns together side-by-side; thus, the input file wiggles its way across the page like a snake. In the example above, the argument "7" tells snake to form the records into seven columns. Because the input records were sorted, the output columns are sorted downwards. snake accepts an optional second argument, a number specifying how many lines there are on a "page", so that once a column has gone as deep as a page, the next record will be put into the next column (or into the first column of the next page). 0 This clever little REXX stage (by Wayne Pascal, of IBM) is another interesting use of elastic with two input streams. It reverses the effect of a (one-argument) snake stage by converting multi-column output back to a single column: 1 0 Pipe Dreams Page 59 ------------------------------------------------------------------------ 0 +----------------------------------------------------------------------+ | | | /* UNSNAKE REXX: Convert multi-column data to single column */ | | | | Parse Arg width . /* Argument is column width. */ | | | | 'CALLPIPE (endchar ?)', /* Multiple columns to one: */ | | '*: |', /* Columnar records from pipe. 
*/ | | 'a: elastic |', /* Make a looping pipeline. */ | | 'b: chop' width '|', /* Chop at specified column. */ | | '*:', /* First piece back into pipe. */ | | '?', | | 'b: |', /* Chopped-off piece to here. */ | | 'locate 1 |', /* Discard it, if it's null. */ | | 'a:' /* Route to the ELASTIC stage. */ | | | +----------------------------------------------------------------------+ 0 The first column (of "width" characters) of each record is chopped off and sent to the calling pipeline. The remainder of each record is written to the secondary output of chop. If it is a null record, it is discarded; otherwise, it is routed back to the secondary input of the elastic stage, to go through the cycle again, until all of the columns have been "peeled" off all the records. 0 not: not is a handy little stage that runs another stage with its + not + ___ output streams cross-connected. Typically, it is used to run a selection stage when one wants only the secondary output from that stage. Running the selection stage with not causes the secondary output of the selection stage to be written on the primary output of the not stage, and vice versa, so if you don't want the primary output of the + __________ selection stage, you simply don't define a secondary output for the not stage. In other words, this is a way of avoiding building a multi-stream pipeline when you don't really need one. For example, to select only the nodeids from records containing e-mail addresses of the form "userid@nodeid", one could use: 0 . . . | not chop after @ | . . . 0 By itself, chop after @ would write the userid to its primary output stream and the nodeid to its secondary output stream. Prefaced by not and with no secondary stream defined, it writes only the nodeids. 0 not can be used with state when one wants to know only the files that don't exist. 
The next example is an EXEC that is called to determine the first non-existent file with a name of the form "filename RITAnnn", where "nnn" is a 3-digit sequence number. Its return code is a number between 1 and 999 specifying the lowest-numbered such file that does not exist. The input to not state consists of records containing all possible file names of the desired form, in numeric sequence. The quiet option specifies that the return code from state is to be zero even when a file does not exist, and the nodetails option specifies that the 1 0 Page 60 Pipe Dreams ------------------------------------------------------------------------ 0 output of state is to be file names, not file descriptions. Thus, the output from not state consists of records containing the names of the non-existent files. The first of those records is selected; the filetype sequence number is extracted, and the return code from the EXEC is set to that number: 0 +----------------------------------------------------------------------+ | | | /* GETNXTFT EXEC: Get lowest unused filetype sequence number */ | | | | 'PIPE', | | 'literal' Arg(1) 'RITA|', /* Filename and skeleton type. */ | | 'dup 998 |', /* So can make 001-999. */ | | 'spec', /* Format file name: */ | | '1-* 1', /* "filename RITA". */ | | 'pad 0', /* (For high-order 0's.) */ | | 'number next.3 right |', /* "filename RITAnnn". */ | | 'not state nodetails quiet |',/* Select nonexistent files. */ | | 'take 1 |', /* Need only the first one. */ | | 'spec -3;-1 1 |', /* Isolate sequence number. */ | | 'append literal 1 |', /* (In case all files exist.) */ | | 'aggrc' /* Set RC to sequence number. */ | | | | Exit RC /* Return sequence number. */ | | | +----------------------------------------------------------------------+ 0 This example reformats messages that begin with a number formatted by a REXX Format(n,12,3) function. The first six bytes of the number field are removed, but a warning is appended if any number is truncated. 
This example illustrates the point that juxtapose produces output only when it gets input from its secondary stream. Thus, the null records corresponding to non-truncated numbers must be fed to juxtapose so that it will write the corresponding records from its buffer: 1 0 Pipe Dreams Page 61 ------------------------------------------------------------------------ 0 +----------------------------------------------------------------------+ | | | 'CALLPIPE (endchar ?)' , /* Format end accounting msgs: */ | | 'stem endmsg. |' , /* Numbers from Format(n,12,3).*/ | | . . . /* Massage message text. */ | | 'c: not chop 6 |' , /* Align with "total" column. */ | | 'j: juxtapose |' , /* Mark overflow, if any. */ | | '*:' , /* Feed into statistics output.*/ | | '?', | | 'c: |' , /* High-order digits to here. */ | | 'strip |' , /* Make it null if no digits. */ | | 'n: nlocate 1 |' , /* Divert any non-null records.*/ | | 'f: faninany |' , /* Pull non-nulls back in. */ | | 'j:' , /* Append to output line. */ | | '?', | | 'n: |' , /* High-order digits to here. */ | | 'spec /(*overflowed*)/ 3 |', /* Warn if anything left. */ | | 'f:' /* Append to output line. */ | | | +----------------------------------------------------------------------+ 0 : parcel and random: parcel and random are two new stages in CMS 12 that + parcel and random + _________________ : you will find described in the author's help files. parcel treats its : primary input as a stream of bytes, ignoring record boundaries. It : reads records containing numbers from its secondary input and chops the : byte-stream from its primary input into records of the specified sizes. : random writes four-byte output records containing pseudo-random numbers. : It accepts a modulus value and a seed as arguments. 
This example uses : both of these new stages to generate test data containing records of : random lengths between 1 and 80 bytes: 0 : +----------------------------------------------------------------------+ : | | : | 'PIPE (endchar ?)', /* Generate random-length recs:*/ | : | '< lyrics file a |', /* Read input data. */ | : | 'p: parcel |', /* Chop to specified lengths. */ | : | '> test data a', /* Write output file. */ | : | '?', | : | 'random 81 |', /* Random between 0 and 80. */ | : | 'spec 1-* c2d 1 |', /* Convert to decimal. */ | : | 'p:' /* Series of record lengths. */ | : | | : +----------------------------------------------------------------------+ 0 : With a modulus value of 81, random writes records containing binary : numbers between 0 and 80. Once spec has converted the values in those : records to decimal, parcel can use them to decide how many bytes to : write to its output stream at a time. If it receives a value of 0, it : will write a null record, but the disk write stage will discard that, so : all the records in the file TEST DATA will be between 1 and 80 bytes : long. 1 0 Page 62 Pipe Dreams ------------------------------------------------------------------------ 0 . pick: pick is the most important stage added in CMS 11. It is both + pick + ____ . very simple and very powerful. pick selects records that satisfy a . specified relation. It can be used to compare two fields within each . record or to compare a field in each record with a constant. pick's . syntax is: 0 . +----------------------------------------------------------------------+ . | | . | PICK [NOPAD | PAD <char>] | . | {<inputrange>|<string>} <relation> {<inputrange>|<string>} | . | | . | <relation> is one of == ^== >> >>= << <<=. /== and \== are | . | synonyms for ^==. | . | | . +----------------------------------------------------------------------+ 0 . All of the REXX strict operators are allowed. The fields in the records . can be defined with the full "inputrange" syntax (column ranges, field .
ranges, or word ranges, relative to either the beginning or the end of . the record). For example, this filter processes records that have a . four-character timestamp in the first four columns and selects those . with times before 8 am: 0 . . . . | pick 1.4 << "0800" | . . . 0 . And this cascade of filters selects records with times between 8am and . 5pm: 0 . . . . | pick 1.4 >>= "0800" | pick 1.4 <<= "1659" | . . . 0 . This example selects records in which the second word is identical to . the fourth word (or both are null): 0 . . . . | pick word 2 == word 4 | . . . 0 . While this example selects records that contain at least two words: 0 . . . . | pick word 2 \== // | . . . 0 . And this example selects records for which the 14-character field . starting in column 23 is strictly greater than the 14-character field . starting in column 57: 0 . . . . | pick 23.14 >> 57.14 | . . . 0 | piptstkw: piptstkw is a new pipeline command (available in REXX + piptstkw + ________ | filters) to test whether the string specified in its arguments can be | decoded as the keyword specified in its arguments. 1 0 Pipe Dreams Page 63 ------------------------------------------------------------------------ 0 predselect: predselect ("predicate select") is a tool for building + predselect + __________ high-performance selection stages that require destructive testing of their input records. A plumber in Japan reports that rewriting his DBCS selection filters to use predselect speeded them up by thirty-three percent. 0 predselect copies a record from its primary input to its primary output whenever it receives a record on its secondary input. It copies a record from its primary input to its secondary output whenever it receives a record on its tertiary input. The idea behind predselect is that its primary input will contain the original, unmodified records. 
Its secondary and tertiary inputs will be from a second copy of the original records, one that has been modified as necessary to decide whether the records should be selected or rejected. Receiving a record on its secondary or tertiary input, then, is a signal for how it should treat the corresponding unmodified record from its primary input. 0 ANYCASE REXX executes a passed selection stage, which might be, say, "find banana", in a case-insensitive manner by first upper-casing both the passed stage and its input: 0 +----------------------------------------------------------------------+ | | | /* ANYCASE REXX: Execute passed selection stage ignoring case */ | | | | Parse Upper Arg stage /* Upper-case stage and parms. */ | | | | 'CALLPIPE (endchar ?)' , | | '*.input: |' , /* Input from pipeline. */ | | 'f: fanout |' , /* Make second copy of input. */ | | 'p: predselect |' , /* Select/reject from primary. */ | | '*.output.0:' , /* Selected records on primary.*/ | | '?', | | 'f: |' , /* Second copy of input here. */ | | 'xlate upper |' , /* Upper-case this set of recs.*/ | | 's:' stage'|' , /* Perform upper-cased test. */ | | 'p: |' , /* To/from predselect's 2ry. */ | | '*.output.1:' , /* Rejected recs on secondary. */ | | '?', | | 's: |' , /* Records rejected by "stage".*/ | | 'p:' /* To predselect's 3ry input. */ | | | +----------------------------------------------------------------------+ 0 The subroutine pipeline reads the records from the main pipeline and makes two copies of each. The second copy of each record is upper-cased and then read into the passed stage, which selects or rejects it. The records selected by the passed stage are sent to the secondary input of predselect, signalling it to select the corresponding unmodified record from its primary input stream. The records rejected by the passed stage are sent to the tertiary input of predselect, signalling it to reject the corresponding unmodified record. 
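The trick ANYCASE REXX performs with predselect -- run the destructive test on a modified copy of each record, but emit the untouched original -- can be sketched outside Pipelines as well. This is a rough Python analogue of that flow; the function name and shape are mine, not anything in CMS Pipelines:

```python
def anycase_select(records, predicate):
    """Analogue of the ANYCASE trick: test an upper-cased copy of each
    record, but emit the original, unmodified record.

    Returns (selected, rejected), mirroring predselect's primary and
    secondary outputs."""
    selected, rejected = [], []
    for rec in records:
        # The destructive test runs on a modified copy only.
        if predicate(rec.upper()):
            selected.append(rec)      # like a signal on the secondary input
        else:
            rejected.append(rec)      # like a signal on the tertiary input
    return selected, rejected
```

Here `predicate` plays the role of the passed selection stage (the upper-cased "find banana"), while the returned lists carry the unmodified records, just as predselect's primary and secondary outputs do.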
1 0 Page 64 Pipe Dreams ------------------------------------------------------------------------ 0 reverse: reverse is a new stage that facilitates the old plumber's + reverse + _______ trick of reversing records to examine their ends more easily. For example, the following fragment is from a pipeline that processes records that end in an Internet-style network address. The records are reversed so that find can be used to select those that end with ".ibm.com". The selected records are then restored by reversing them a second time: 0 +----------------------------------------------------------------------+ | | | 'PIPE', | | . . . | | 'reverse |', /* Reverse contents of record. */ | | 'find moc.mbi.|', /* Select records for IBM users. */ | | 'reverse |', /* Restore contents of record. */ | | . . . | | | +----------------------------------------------------------------------+ 0 scm: scm is a new stage that processes REXX and C programs to line up + scm + ___ comments and to complete unclosed comments. It is the formatting engine of the SC XEDIT macro from the PRPQ (which is called SCM XEDIT in the ESA version of Pipes). + _____ 0 : spill: The spill stage added in CMS 12 implements XEDIT-like word spill + spill + _____ : and is documented in the author's help files. The word separator used : by spill can be defined as a single character or as a string. Case can : be ignored in searching for the word separator. When a record is split, : the word separator can be retained in the output or deleted. Continued : lines can be offset by a number of blanks or can have a string inserted : at the beginning. If a secondary output stream is defined, records (or : record segments) that cannot be split using the specified rules are : written to the secondary output rather than being split at the specified : length. 
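The word-spill rule just described can be approximated in a few lines of Python. This is my loose sketch of the behavior, not spill's actual algorithm (the real stage's options for case, offsets, and inserted strings are richer):

```python
def spill(record, width, seps=' ', keep=False):
    """Loose sketch of spill's word-spill rule: break a record into
    pieces no longer than `width`, cutting at the last separator that
    fits.  A piece with no usable separator goes to `leftover` --
    spill's secondary output -- instead of being split mid-word."""
    pieces, leftover = [], []
    rest = record
    while len(rest) > width:
        cut = max(rest.rfind(s, 0, width) for s in seps)
        if cut <= 0:                      # no separator fits; divert it
            leftover.append(rest)
            return pieces, leftover
        pieces.append(rest[:cut + 1] if keep else rest[:cut])
        rest = rest[cut + 1:]
    if rest:
        pieces.append(rest)
    return pieces, leftover
```

The RFC822 cascade shown below corresponds to calling this first with a blank separator, then feeding the leftovers through a second pass with `seps='.@ '` and `keep=True`, just as the two spill stages hand records from the primary to the secondary output.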
0 : This example splits its input records according to RFC822, preferably : splitting at a blank, but splitting at an at-sign or a period if : necessary, and starting continued lines with at least one blank: 0 : +----------------------------------------------------------------------+ : | | : | 'CALLPIPE (endchar ? name 822CONT)', /* Continue per RFC822: */ | : | '*: |', /* Input from pipeline. */ | : | 's: spill 79 offset 1 |', /* Spill at word boundary. */ | : | 'f: faninany |', /* Gather streams. */ | : | '*:', /* Output to pipeline. */ | : | '?', | : | 's: |', /* Spill at syntactic break. */ | : | 'spill 79 anyof /.@ / keep offset 1 |', | : | 'f:' /* Go hope for the best. */ | : | | : +----------------------------------------------------------------------+ 1 0 Pipe Dreams Page 65 ------------------------------------------------------------------------ 0 : The first spill does a word-spill to produce records that are no longer : than 79 characters. If there are no suitable blanks at which to split : the record (or the remainder of the record), this first spill writes : what is left of the record to its secondary output, where the second : spill splits it into one or more pieces, using a period, an at-sign, or : a blank as the word separator and retaining the word separator character : in the output records. 0 : sqlselect: sqlselect issues an SQL/DS query and converts the result to + sqlselect + _________ : printable format with a header line showing the names of the columns. : For CMS 12, sqlselect has been enhanced and "promoted" from being a : sample program to being a standard built-in program. 0 starmon: starmon is a new stage that connects to the CP *MONITOR system + starmon + _______ service, receives and deblocks the monitor data, and writes the individual records into the pipeline. starmon allows one to specify the segment name and to access the segment in either shared or exclusive mode. 
Other arguments allow one to specify that only events data or only sample data are to be captured or that certain monitor domains are to be suppressed. This example connects to the MONDCSS segment in shared mode and writes the monitor records to MONITOR FILE: 0 +----------------------------------------------------------------------+ | | | /* STARMON "SHARED|EXCLUSIVE|EVENTs|SAMPLE" */ | | /* "SUPPRESS " */ | | | | 'SEGMENT LOAD MONDCSS' /* Load the saved segment. */ | | 'CP MONITOR START' /* Start the monitor. */ | | | | 'PIPE', /* Capture monitor data: */ | | 'starmon mondcss shared |', /* Connect to *MONITOR. */ | | '> monitor file a' /* Write data to a file. */ | | | | 'CP MONITOR STOP' /* Stop the monitor. */ | | 'SEGMENT RELEASE MONDCSS' /* Release the segment. */ | | | +----------------------------------------------------------------------+ 0 This pipeline continues to run until halted with an HMONITOR immediate command or a PIPMOD STOP command. 0 starsys: starsys is also used to connect to a CP system service. It is + starsys + _______ similar to starmsg, but it uses a two-way protocol to communicate with the system service, rather than a one-way protocol. It is intended for use with any system service that sends a message and expects a reply, including *ACCOUNT, *LOGREC, and *SYMPTOM. This fragment is from the pipeline that runs in OPERACCT on our VM/ESA system: 1 0 Page 66 Pipe Dreams ------------------------------------------------------------------------ 0 +----------------------------------------------------------------------+ | | | 'CP RECORDING ACCOUNT ON LIMIT 100' | | | | 'PIPE (endchar ?) |', | | 'starsys *account |', /* Connect to *ACCOUNT (2-way).*/ | | 'nfind 'type5'|', /* Discard Type 5 records. */ | | 'nfind 'type8'|', /* Discard Type 8 records. */ | | 'nlocate 13.8 =V/SIE = |', /* Discard V/SIE records. */ | | . . . 
| | | +----------------------------------------------------------------------+ 0 The fact that starsys uses the two-way protocol to obtain data from system services means that a message is not deleted by CP until after that record has been consumed by the pipeline. Thus, if the pipeline processing, say, your accounting data fails, you have not lost the record that caused the failure or the records queued behind it. The importance of this was proved at Princeton the day we replaced starmsg with starsys in the pipeline that receives our accounting records from CP. Because of a combination of power and hardware failures, the system was run for four hours without the disk drive onto which our accounting is supposed to be written. Twice during that period, OPERACCT was autologged. The old pipeline would have consumed all the pending accounting records and would then have been unable to post them. The new pipeline nicely shut itself down without consuming a single record, leaving them all queued in memory until the disk drive became available. 0 suspend: suspend is a new pipeline command that suspends a stage to + suspend + _______ allow another stage to run. The following is a fragment from a pipeline stage that uses Arty Ecock's wonderful RXLDEV to build and communicate with logical devices. Before it puts the virtual machine into a wait state, this stage issues a suspend in case any of the other stages could run: 0 +----------------------------------------------------------------------+ | | | Signal Off Error | | 'SUSPEND' /* Give other stages a shot. */ | | Signal On Error | | logical_device = '' /* Wait for any logical device. */ | | rc = LDEV('Wait','logical_device','seconds') | | | +----------------------------------------------------------------------+ 0 timestamp: timestamp is a new stage that prefixes the records that pass + timestamp + _________ through it with up to 16 bytes of date and time information. 
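The idea behind timestamp is easily pictured with a small Python sketch; this borrows strftime patterns rather than reproducing timestamp's own format codes, and the `clock` parameter is my addition to make the sketch testable:

```python
import time

def timestamp(records, fmt='%Y%m%d%H%M%S', clock=time.localtime):
    """Sketch of the idea behind timestamp: prefix each record that
    flows through with date and time text.  The real stage has its own
    format codes; this sketch just borrows strftime patterns."""
    for rec in records:
        yield time.strftime(fmt, clock()) + ' ' + rec
```
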
0 untab: untab expands '05'x tabs according to the tabstops specified in + untab + _____ . its argument string. If you use the Rice CMS Gopher client, I suggest . that you replace its expand stage with untab -8 (which sets a tabstop at . every eighth byte). untab is an order of magnitude faster. It can 1 0 Pipe Dreams Page 67 ------------------------------------------------------------------------ 0 . reduce total CPU utilization for viewing a large document by twenty . percent or more. 0 : verify: verify is a new selection filter in CMS 12 that is documented + verify + ______ : in the author's help files. It performs an operation much like that of : the REXX Verify() function; that is, it selects records that contain : only the specified characters or that contain none of the specified : characters. The field to be tested can be specified by a generalized : input range. Case can be ignored in testing for matches. This example : validates a Princeton class year specification, which must be either an : apostrophe followed by two digits or an asterisk followed by two digits: 0 : . . . | verify 1 /'*/ | verify 2.2 /0123456789/ | . . . 0 . xrange: xrange was added in CMS 11. It writes a single output record + xrange + ______ . containing a range of characters. If it is invoked with no arguments, . its output record contains 256 bytes that are the collating sequence . ('00'x through 'FF'x). One or two arguments are allowed, specifying the . range or the first and last values in the range. 0 . New and enhanced stages for accessing REXX variables + New and enhanced stages for accessing REXX variables + ____________________________________________________ 0 . CMS 11 added three new stages for accessing REXX variables, as well as . extensive enhancements to the four previously existing REXX "device . driver" stages, rexxvars, stem, var, and varload. Descriptions of the . new function can be found in the author's help files. The new stages . 
all support the main, producer, direct, and symbolic keywords added . recently to the older stages, and they all access "ancestor" REXX . environments in the same way that the older stages do. 0 . vardrop: vardrop is a new stage that drops REXX variables in the + vardrop + _______ . specified environment. The variables to be dropped are named in its . input records. After each variable has been dropped, the corresponding . input record is copied to the output. If a secondary output stream is . defined and a variable did not exist in the specified environment, the . corresponding record is copied to the secondary output rather than to . the primary output. 0 . varfetch: varfetch is a new stage that loads REXX variables into the + varfetch + ________ . pipeline. The variables to be loaded are named in its input records. . The named variables are retrieved from the specified environment and . written to the primary output stream. When the secondary output is . defined, the input record is copied to the secondary output if the . variable is not exposed (or does not exist). varfetch also has an . option toload, which causes its output records to include both the names . and the values of the variables in a format suitable for loading with . varload or varset. 1 0 Page 68 Pipe Dreams ------------------------------------------------------------------------ 0 . varset: varset is a new stage that sets variables from the contents of + varset + ______ . its input records. It is equivalent to varload with the symbolic . option, but also supports a secondary output stream. When the secondary . output is defined, the input record is copied to the secondary output if . the variable was not exposed (or did not exist) before its value was . set. 0 . varload: varload has been enhanced to allow the user to specify whether + varload + _______ . any of the input records are to be treated as comments and, if so, what . characters denote comment records. 0 . 
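The contracts of these variable-access stages can be pictured with a dict standing in for a REXX variable pool (a simplification of mine; a real REXX environment is richer than a dict). This sketch mimics varfetch's routing of found values to the primary output and unknown names to the secondary output:

```python
def varfetch(names, env, toload=False):
    """Dict-based sketch of varfetch's contract: fetch each named
    variable from the environment; values go to the primary output and
    names that are not set go to the secondary output.  With
    toload=True the primary output carries name=value records, in the
    spirit of varfetch's toload option for reloading with varload or
    varset."""
    primary, secondary = [], []
    for name in names:
        if name in env:
            primary.append(f'{name}={env[name]}' if toload else env[name])
        else:
            secondary.append(name)
    return primary, secondary
```
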
rexxvars: rexxvars has been enhanced to allow the user to increase the + rexxvars + ________ . buffer size, so that the values of variables are not truncated to 512 . bytes. rexxvars also now supports the toload option. 0 New stages for use in full-screen applications + New stages for use in full-screen applications + ______________________________________________ 0 Several other new stages have been added to CMS Pipelines to facilitate + _____________ the building of full-screen applications. Since these are rather specialized, I will simply list them briefly: 0 o aplencode processes lines to select the APL/TEXT character set for those characters that should map this way (as done by CP). 0 o apldecode reverses that process. 0 o brwctl is the control stage used by browse. 0 o fullscrq writes DIAGNOSE 24 and DIAGNOSE 8C responses into the pipeline. It works for a virtual machine console, a DIALed-in screen, or a TSO console. 0 . o fullscrc has been added in CMS 11 to obtain the same information as . fullscrq but by querying the physical device, rather than by querying . CP's cached information. 0 o fullscrs decodes the information provided by fullscrq or fullscrc. 0 o menuctl processes a HELPMENU file and displays the menu. 0 . o 3270qrep, which is new in CMS 11, generates the data stream to perform . a query for the device characteristics. 0 o 3277bfra converts a halfword to a 12-bit 3277 buffer address (six bits . per byte). The to16bit option added in CMS 11 does the reverse . mapping. 0 o 3277enc writes the 64-character encoding string used for the 3277 6-bit device code. 1 0 Pipe Dreams Page 69 ------------------------------------------------------------------------ 0 Pipeline termination enhancements + Pipeline termination enhancements + _________________________________ 0 Because several of the recent enhancements to Pipes have to do with + _____ pipeline termination, I think it might be appropriate to review that subject briefly.
0 In the simplest case, a pipeline consists of an input device driver, a few stages to manipulate the data, and an output device driver, and termination of such a pipeline is usually very straightforward. The input device driver reaches end-of-file on the device it is reading from and terminates. Its termination severs its output stream, causing the stage that was waiting to read from that stream to get an end-of-file on its input. That stage has nothing more to do, so it also terminates, causing its output stream to be severed and the next stage to receive end-of-file on its input. Very quickly, the entire pipeline collapses in domino fashion, with each stage getting a chance to do any end-of-file processing it may need to do before it terminates. 0 The first bit of complication that may arise is that one of the stages in the middle of the pipeline may decide to terminate before the input device driver has reached end-of-file. This might be, say, a take 5 stage. Once that stage has read five records and has copied them to its output stream, it terminates, causing both its input stream and its output stream to be severed. That causes the following stage to receive end-of-file on its input and to start the same domino termination effect as before. But it also causes the preceding stage to receive end-of-file the next time it tries to write an output record. 0 Many stages terminate "prematurely" when they receive end-of-file trying to write output. For example, if the stage is an input device driver, there is no point in its continuing to read from its device, if it has nowhere to write the records to, so it terminates. If the stage is a selection stage, such as find or locate, and it had only the one output stream connected, then it, too, will terminate when its output stream is severed. Again, it makes little sense to keep selecting records when no other stage is available to read the selected records. 
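The domino effect just described has a close analogue in Python generator pipelines, where a downstream stage that stops pulling causes GeneratorExit upstream and each stage still gets a chance to run its cleanup. This sketches the analogy only, not Pipelines internals:

```python
def source(events):
    """Endless input "device driver"; its finally block stands in for
    end-of-file processing."""
    try:
        n = 0
        while True:
            yield n
            n += 1
    finally:
        events.append('source terminated')

def take(count, upstream, events):
    """Like take 5: copy `count` records, then quit early."""
    try:
        for i, rec in enumerate(upstream):
            yield rec
            if i + 1 >= count:
                return                    # "premature" termination
    finally:
        events.append('take terminated')

events = []
src = source(events)
records = list(take(5, src, events))
src.close()    # sever the stream explicitly; CPython would also do this
               # when the generator is collected
```

After `take` quits, closing the upstream generator triggers its cleanup, just as a severed stream lets each Pipelines stage do its end-of-file processing before terminating.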
0 So, when our take 5 terminates, end-of-file may propagate all the way back to the beginning of the pipeline, causing it to collapse as before, but with the dominoes falling from the middle toward both ends, rather than from beginning to end. 0 On the other hand, some stages do not terminate when they receive end-of-file on their output stream. For example, the host command processors, such as cp and cms, keep on issuing commands, and the output device drivers continue writing to their devices, even though they no longer have a connected output stream. They can still do useful work without being able to write to the pipeline, so there is no reason for them to stop. By the same logic, selection stages do not terminate when one output stream is severed, if the other output stream is still connected. 1 0 Page 70 Pipe Dreams ------------------------------------------------------------------------ 0 Thus, end-of-file may not propagate backwards in the pipeline through a branch or through certain stages. And this is certainly the way one would want things to work--segments of the pipeline that still have useful work to do keep on going, even though other segments may have completed. When their work is done, they, too, terminate. 0 How a stage behaves when its output is severed is well documented in the author's help files; each of the relevant stages has a section entitled "Premature Termination". A stage's behavior in this respect is generally the same as that of all the other stages in its class and is generally what one would expect. 0 There are, not surprisingly, a few cases in which what one expects may vary with the circumstances. That is the reason the new stop option was added to fanout to allow the user to specify the number of output streams that must reach end-of-file before fanout will terminate. 
The default is for it to continue running until all of its output streams have reached end-of-file (unless, of course, its input stream is severed), but now one can force it to terminate when, say, only one of its output streams has reached end-of-file. 0 Typically, the built-in stages terminate when their input streams are severed, so in general end-of-file easily propagates forward as the input stream of each stage is severed, even though the pipeline may branch. However, stages that have multiple input streams may or may not terminate when just one of their input streams is severed. synchronise, for example, terminates when any one of its streams is severed, but spec, on the other hand, continues running until all of its input streams are severed. For the most part, these behaviors are what one would wish. However, spec, too, has been enhanced to have a stop option to specify how many of its input streams must be at end-of-file before it will stop. 0 There is a set of input device drivers that never receive an end-of-file indication. These are the stages that take their input from external events or processes for which no end is defined. For example, immcmd accepts console commands and starmsg receives messages via IUCV. The delay stage, which waits for a timer expiration, is similar, although not an input device driver. All of the stages that wait on external events can be terminated by use of the PIPMOD STOP immediate command. PIPMOD STOP is not selective; it terminates every delay, immcmd, | starmon, starmsg, starsys, tcpclient, tcpdata, tcplisten, and udp stage in the entire pipeline set. This is reasonable, as often one wishes everything to terminate once any one of several possible events occurs. 
For example, this pipeline is used in an EXEC that wakes up twice a day to begin collecting CP monitor data but that also terminates when "stop" is typed on the virtual machine console: 1 0 Pipe Dreams Page 71 ------------------------------------------------------------------------ 0 +----------------------------------------------------------------------+ | | | 'PIPE (endchar ?)', /* Stop by timer or command: */ | | 'literal' totime '|', /* Define stopping time. */ | | 'delay |', /* Wait until that time. */ | | 'spec /Time expired./ 1 |', /* Change it to a message. */ | | 'f: faninany |', /* Will get one msg or t'other.*/ | | 'console |', /* Display reason for stopping.*/ | | 'take 1 |', /* Stop pipe after one event. */ | | 'var reason |', /* Save reason in variable. */ | | 'spec /PIPMOD STOP/ 1 |', /* Convert it into a command. */ | | 'command', /* Stop the non-ending stages. */ | | '?', | | 'immcmd stop|', /* Wait for immediate command. */ | | 'spec /Stopped./ 1 |', /* Change it to a message. */ | | 'f:' /* Go stop the pipeline. */ | | | | If reason = 'Stopped.' Then Exit /* Exit if we were told to. */ | | | +----------------------------------------------------------------------+ 0 When either delay or immcmd produces a record (because either the time interval has expired or a STOP command has been typed on the virtual machine console), then that record is first converted into a message that is stored in a REXX variable and is then converted into a PIPMOD STOP command that is issued to stop the non-ending delay and immcmd stages, so that the pipeline can terminate. 0 Another of the recent enhancements to Pipes is the addition of the + _____ pipestop stage, which can replace the spec /PIPMOD STOP/ and command stages in this example. As soon as pipestop receives an input record, it performs the same function as PIPMOD STOP; that is, it posts all of the ECBs that are being waited on by other stages. 
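The stop-on-first-event pattern in this pipeline -- two sources racing, one record wins, and everything else is shut down -- can be sketched in Python with threads. The names and structure here are mine; this mirrors only the control flow, not Pipelines dispatching:

```python
import queue
import threading

def stop_race(timer_seconds, commands):
    """Wait for whichever comes first: a timer expiry (like delay) or a
    'stop' command (like immcmd).  The first reason wins, and the shared
    event shuts the other source down (like PIPMOD STOP)."""
    reasons = []
    done = threading.Event()

    def timer():
        if not done.wait(timer_seconds):   # False means the timer expired
            reasons.append('Time expired.')
            done.set()

    def listener():
        while not done.is_set():
            try:
                cmd = commands.get(timeout=0.05)
            except queue.Empty:
                continue
            if cmd == 'stop':
                reasons.append('Stopped.')
                done.set()

    threads = [threading.Thread(target=timer),
               threading.Thread(target=listener)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return reasons[0]                      # like take 1: first event only
```
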
0 However, if one applies a bit more "pipethink" to this problem, it becomes clear that neither PIPMOD STOP nor pipestop is actually required here. If the order of the console and take 1 stages is reversed, then end-of-file can propagate backwards through the faninany stage and terminate both delay and immcmd. (As a typical output device driver, console does not terminate when its output stream is severed, so end-of-file does not propagate backward through it.) 0 Some of the stages that can be stopped by PIPMOD STOP and pipestop can also be stopped by their own private immediate commands, which stop only the stage in question. For example, the HACCOUNT immediate command stops only a starsys *account stage. Using the immcmd stage, one has the ability to create other immediate commands to signal a single user-written stage to terminate. Armed with the appropriate immediate commands and the new gate stage, one can disassemble a complex pipeline much more gracefully than by using PIPMOD STOP. 1 0 Page 72 Pipe Dreams ------------------------------------------------------------------------ 0 In general, terminating a pipeline in an orderly fashion gets to be something one needs to think about only when the pipeline becomes rather complex and has more than one stage waiting on external events. Then it may become a satisfyingly intricate process, somewhat akin to designing fjords. For this reason, I was grateful for the addition of gate in CMS 10. 0 +----------------------------------------------------------------------+ | | | 'PIPE (endchar ?)', /* Using GATE to terminate: */ | | 'immcmd CMD|', /* Capture immediate commands. */ | | 'd: doimmcmd |', /* Analyze and process them. */ | | 'f: faninany |', /* Gather records for log file.*/ | | 'logger', /* Manage the log files. */ | | '?', | | 'd: |', /* DOIMMCMD's STOP signal. */ | | 'g: gate strict', /* Terminate when get signal. */ | | '?', | | 'starmsg |', /* Connect to CP *MSG service. 
*/ | | 'u: nfind 00000001|', /* Divert user messages. */ | | 's: find 00000007|', /* Divert SCIF; keep IMSGes. */ | | 'doimsg |', /* Select interesting IMSGes. */ | | 'g: |', /* Run through gateway. */ | | 'elastic |', /* Buffer a few IMSG records. */ | | 'b: bitftp |', /* Run the FTP requests. */ | | 'f:', /* Write them to the log file. */ | | '?', | | 's: |', /* SCIF messages to here. */ | | 'elastic |', /* Buffering needed here, too. */ | | 'b:', /* Into BITFTP's 2ry input. */ | | '?', | | 'u: |', /* User messages to here. */ | | 'domsg |', /* Respond to user queries. */ | | 'f:' /* Write to the log file. */ | | | |----------------------------------------------------------------------| + + + | | | +------+ +-----+ +---+ +---+ | | |immcmd|--|doimm|--------------------------------------| |--| l | | | +------+ | cmd |----------+ +---+ | f | | o | | | +-----+ | +----+ | b | | a | | g | | | +----+ +-----+ +----+ +-|gate| +-------+ | i | | n | | g | | | |star|--|nfind|----|find|----| |--|elastic|--| t |--| i | | e | | | |msg | | MSG |-+ |IMSG|-+ +----+ +-------+ | f | | n | | r | | | +----+ +-----+ | +----+ | +-------+ | t | | a | +---+ | | | +----------|elastic|--| p | | n | | | | +-----+ +-------+ +---+ | y | | | +-|domsg|------------------------------| | | | +-----+ +---+ | | | +----------------------------------------------------------------------+ 1 0 Pipe Dreams Page 73 ------------------------------------------------------------------------ 0 The purpose of gate is to spread end-of-file in a delicate manner. gate terminates when it reads a record on its primary input stream; until then, records it reads on other streams are simply copied to the corresponding output stream. When the option strict is specified, gate checks the state of its primary input stream each time it is ready to copy a record to another output stream; when strict is omitted, gate stops the next time its primary input stream is selected. 
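gate's copy-until-signalled behavior (without the strict option) can be pictured with a small Python generator; modelling the control stream as a list the producer appends to is my simplification:

```python
def gate(control, data):
    """Generator sketch of gate without strict: copy data records
    through until something shows up on the control (primary input)
    stream, then quit, so end-of-file spreads downstream."""
    for rec in data:
        if control:            # a record arrived on the primary input
            return
        yield rec

signal = []

def numbers():
    """Data source that fires the gate partway through, the way
    doimmcmd sends gate a record on its primary input."""
    for i in range(6):
        if i == 3:
            signal.append('stop')
        yield i

passed = list(gate(signal, numbers()))
```

Once the signal arrives, the generator returns, and whatever consumes `passed` sees end-of-file -- the analogue of gate severing its output stream.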
0 The example shown on the preceding page illustrates the use of gate. This is the central routine of a service machine. It has two input stages, immcmd and starmsg. starmsg receives three kinds of messages: user messages, which are processed by the domsg stage, CP information messages, which form the primary input of the bitftp stage, and SCIF (secondary console interface) messages, which form the secondary input of the bitftp stage. One of the immediate commands that this pipeline accepts is STOP, but the familiar practice of turning that immediate command into a PIPMOD STOP command and issuing that to terminate the pipeline will not do in this case. The bitftp stage may be running a transaction that should be allowed to complete before the pipeline terminates. If a PIPMOD STOP command were issued, the starmsg stage would be terminated. That would mean that the bitftp stage could receive no more SCIF messages. It would, therefore, be unable to complete its transaction, which involves driving a process in another virtual machine. In fact, the bitftp stage would be unable to complete its transaction if starmsg, nfind, find, or the second elastic stage were terminated, because they form the path through which it receives the SCIF messages. Furthermore, it would be unable to log its transaction if the faninany or logger stages were terminated too soon. 0 However, by using gate, one can arrange things so that pulling the plug on this pipeline is no problem. When the doimmcmd stage receives a STOP command from the immcmd stage, it first writes a message to the logger stage (via the faninany stage), then sends a record to the gate stage on gate's primary input stream, and finally terminates. Its termination causes the immcmd stage to terminate, because (like most other input device drivers) immcmd terminates when its output stream has been severed. 
When gate receives the record from doimmcmd on its primary input stream, it also terminates, which severs its other input stream and its output stream. The severing of its output stream causes the elastic stage that had been connected to that stream also to terminate, severing its connection to the primary input of the bitftp stage. The fact that gate's secondary input stream has been severed does not affect the find stage, whose primary output was connected to that stream; find still has its secondary output connected, so it continues to run. Similarly, faninany continues to run despite the severing of its primary input stream when the doimmcmd stage terminated. 0 The remaining stages continue to run while bitftp finishes its transaction. It then reads from its primary input stream to get an IMSG describing its next transaction. When it discovers that its primary input has been severed, it issues a PIPMOD STOP command, which terminates the starmsg stage, starting a domino effect that results in 1 0 Page 74 Pipe Dreams ------------------------------------------------------------------------ 0 the termination of nfind, find, elastic, and domsg, but leaves faninany running, because it still has an input stream connected. bitftp then terminates itself, resulting in the termination of faninany and finally logger (but not until after logger has logged the final messages from all the other stages). 0 IX. SUPPORT FOR THE PERSONAL/370 AND PERSONAL/390 + IX. SUPPORT FOR THE PERSONAL/370 AND PERSONAL/390 + __________________________________________________ 0 : In addition to the awstape option added to block and deblock in CMS 12, . the author has written two pipeline stages specifically for the . Personal/370 or Personal/390 system, to take advantage of the interfaces . between the P/370 or P/390 and the associated OS/2 system. 0 . The os2file pipeline stage reads an OS/2 file into the pipeline or . creates an OS/2 file from the records in the pipeline. This example . 
reads the OS/2 CONFIG.SYS file, extracts the SET PATH command from that file, formats the list of search path items into four columns, and displays them on the CMS console:

+----------------------------------------------------------------------+
|                                                                      |
|  'PIPE',                                                             |
|     'os2file c:\config.sys |',     /* Read CONFIG.SYS.          */   |
|     'find SET PATH|',              /* Select SET PATH command.  */   |
|     'change /SET PATH=// |',       /* Remove command itself.    */   |
|     'split at ; |',                /* Split one item per line.  */   |
|     'pad 20 |',                    /* Pad each to 20 bytes.     */   |
|     'snake 4 |',                   /* Form into 4 columns.      */   |
|     'literal |',                   /* Prepend blank line.       */   |
|     'literal Search Path|',        /* Prepend title line.       */   |
|     'console'                      /* Display on console.       */   |
|                                                                      |
+----------------------------------------------------------------------+

The os2 pipeline stage issues OS/2 commands and captures the response in the pipeline. This example issues an OS/2 DIR command to list the files in the C:\P370 directory that have a filetype of DOC, transforms the response from the DIR command into CMS EXEC PCOPY commands to copy those files to the CMS A-disk, and then issues those PCOPY commands:

+----------------------------------------------------------------------+
|                                                                      |
|  'PIPE',                                                             |
|     'literal dir c:\p370\*.doc |',  /* OS/2 command to be issued.*/  |
|     'os2 |',                        /* Issue it.                 */  |
|     'locate 1 |',                   /* Discard null lines.       */  |
|     'nfind _|',                     /* Discard headers, trailers.*/  |
|     'spec',                         /* Build EXEC PCOPY commands:*/  |
|        '=EXEC PCOPY C:\P370\= 1',   /*   literal,                */  |
|        'word 1 next',               /*   filename,               */  |
|        '=.DOC= next',               /*   literal,                */  |
|        'word 1 nextword',           /*   filename,               */  |
|        '=DOC A= nextword |',        /*   literal.                */  |
|     'command |',                    /* Issue EXEC PCOPY commands.*/  |
|     'console'                       /* Display any response.     */  |
|                                                                      |
+----------------------------------------------------------------------+
These two new pipeline stages are included on the P/370 Starter System Version 1.0 CD-ROM, which has been distributed since the beginning of 1994. They are in a filter package named PIPLOCF MODULE, which is automatically loaded as a nucleus extension if it is in the search order when CMS Pipelines is initialized.

Appendix A

CMS PIPELINES RUNTIME LIBRARY DISTRIBUTION

This distribution of the CMS Pipelines runtime library has been made available in response to SHARE requirement SOCMSX93003, which requested IBM's permission for software vendors (including shareware vendors) to package a current version of the CMS Pipelines runtime library with their software products in order that they could use new features of "Pipes" in their products.

Questions concerning this distribution may be addressed to Melinda Varian (Melinda@Princeton.EDU).

Terms and Conditions

IBM and Princeton University have signed a Software License Agreement (Number 07227), which contains the following provisions:

o 3.1 IBM hereby grants to Princeton a nonexclusive, royalty free, worldwide right to use, reproduce, copy, execute, display, perform, prepare and/or have prepared derivative works to the Licensed Works, to distribute internally and/or externally for any purpose whatsoever and to sublicense (royalty free only) and authorize others to do and/or exercise any, some or all of the foregoing, including the right to further sublicense others. The rights granted in this Section 3.1 are hereinafter called "Rights Granted".
o 4.0 Princeton acknowledges that title to the Licensed Works shall remain with IBM and that any copies of the Licensed Works or portions thereof made by Princeton in accordance with the Rights Granted hereunder shall include an IBM copyright notice thereon. The notice shall be affixed to all copies or portions thereof in such manner and location as to give reasonable notice of IBM's claim of copyright and shall be in conformity with all applicable regulations of affixation prescribed by the United States Register of Copyright. Princeton agrees to ensure that any licensee and/or sublicensee will conform to the requirements of this Section.

o Appendix A: Description of Licensed Works

  Code Files: NXPIPE MODULE, PIPELINE HELPLIB
  Documentation: PIPELINE LIST3820, PIPELINE BOOK

Support

There is no official support for the code and documentation included in this distribution. If you experience a problem with the code or documentation, please report it on one of the Pipelines fora:

o PROB PIPELINE on VMSHARE
o PIPELINE CFORUM on IBMLink
o CMSPIP-L LISTSERV list

You will likely receive assistance, but there are no guarantees.

Appendix B

A SIMPLE CO-PIPE APPLICATION WITH TWO FITTINGS

         TITLE 'COPIPCPY -- Trivial Co-Pipe Application'
         SPACE 3
*---------------------------------------------------------------------*
* This program runs a co-pipe with two "fitting" stages, reading      *
* from the left-hand pipeline and copying the records produced by     *
* that pipeline to the right-hand pipeline.
* *---------------------------------------------------------------------* SPACE 3 COPIPCPY START 0 COPIPCPY AMODE ANY COPIPCPY RMODE ANY EJECT , *---------------------------------------------------------------------* * Slightly modified PIPRESUM macro * *---------------------------------------------------------------------* MACRO , Modified not to require pl/j. *MWV* .* Resume the co-pipe to do some more work. FITLV2 PIPRESUM &PLIST,&TERMINATE=NO,&RPL=,&WAITFOR=, *MWV* + &LISTPTR=PIPPLIST *MWV* AIF (D'PIPFTPRM).HAVEIT FITLV2 PIPFTPRM TYPE=DSECT FITLV2 &SYSLOC LOCTR , FITLV2 .HAVEIT ANOP , FITLV2 &P SETC 'PIPFTPRM' FITLV2 PIPBLDPL &LISTPTR,&PLIST Build a PLIST. *MWV* L 15,0(,1) Get parameter FITLV2 AIF (K'&RPL EQ 0).NORPL1 FITLV2 #PIPLA 14,&RPL Address RPL FITLV2 PUSH USING FITLV2 DROP , FITLV2 USING &P.,15 FITLV2 ST 14,&P.RPL Set RPL address FITLV2 POP USING FITLV2 .NORPL1 ANOP , FITLV2 AIF (K'&WAITFOR EQ 0).NOWAIT1 FITLV2 #PIPLA 14,&WAITFOR FITLV2 PUSH USING FITLV2 DROP , FITLV2 USING &P.,15 FITLV2 ST 14,&P.WAITFOR CHRALNUM POP USING FITLV2 .NOWAIT1 ANOP , FITLV2 PUSH USING FITLV2 DROP , FITLV2 USING &P.,15 FITLV2 &B1 SETB ('&TERMINATE' EQ 'YES') FITLV2 &B2 SETB (K'&WAITFOR GT 0) FITLV2 1 0 Co-Pipe Application with Two Fittings Page 79 ------------------------------------------------------------------------ 0 MVI &P.REQTYPE,&B1*&P.TERMINATE+&B2*&P.WAITRPL Set * flags FITLV2 L 15,&P.RESUME Get resume entry point CHRALNUM BALR 14,15 Go there CHRALNUM POP USING FITLV2 MEND , FITLV2 *---------------------------------------------------------------------* EJECT , PRINT NOGEN REGEQU PIPFTRPL TYPE=DSECT Define canonical names. PRINT GEN PIPFTPRM TYPE=DSECT,PREFIX=PRM Define shorter names. PIPFTRPL TYPE=DSECT,PREFIX=RPL Define shorter names. EJECT , COPIPCPY CSECT , STM R14,R12,12(R13) Save caller's registers. LR R10,R15 Establish base register. USING COPIPCPY,R10 LA R15,REGSAVE Establish savearea chain. 
ST R15,8(,R13) ST R13,4(,R15) LR R13,R15 * LA R7,COMMAREA Base co-pipe communications area. USING PRM,R7 LA R8,LEFTRPL Base RPL for left-side fitting. USING RPL,R8 LA R9,RIGHTRPL Base RPL for right-side fitting. SPACE 1 *---------------------------------------------------------------------* * Set up the co-pipe: * *---------------------------------------------------------------------* CMSCALL CALLTYP=PROGRAM,PLIST=BOTH,COPY=NO Set up co-pipe. LTR R15,R15 Did it fail? BNZ ERROUT Yes, exit. SPACE 1 *---------------------------------------------------------------------* * Loop running the co-pipe until all records are read and written: * *---------------------------------------------------------------------* LOOP CLI RPLSTATE,RPLRDONE Did READ complete? BNE EXIT No, we're done; exit. MVC RPLBUFFER-RPL(8,R9),RPLBUFFER Record address & length. MVI RPLSTATE-RPL(R9),RPLWRITE Set WRITE request. PIPRESUM COMMAREA Resume the co-pipe. CLI PRMSTATUS,PRMREADY Normal completion? BNE EXIT No, we're done; exit. * CLI RPLSTATE-RPL(R9),RPLWDONE WRITE complete? BNE EXIT No, we're done; exit. MVI RPLSTATE,RPLREAD Consume record and read another. PIPRESUM COMMAREA CLI PRMSTATUS,PRMREADY Normal completion? 1 0 Page 80 Co-Pipe Application with Two Fittings ------------------------------------------------------------------------ 0 BE LOOP Yes, go process the record. SPACE 1 *---------------------------------------------------------------------* * Terminate the co-pipe and exit: * *---------------------------------------------------------------------* EXIT CLI PRMSTATUS,PRMDONE Has co-pipe ended? BE RETURN Yes, can leave now. PIPRESUM COMMAREA,TERMINATE=YES Terminate co-pipe. * RETURN L R13,4(,R13) Restore registers and return. L R14,12(,R13) L R15,RETCODE Set the return code. LM R0,R12,20(R13) BR R14 * ERROUT ST R15,RETCODE Set the return code. B RETURN Exit. 
EJECT , *---------------------------------------------------------------------* * Constants and workareas: * *---------------------------------------------------------------------* DS 0D REGSAVE DC 18F'0' Register savearea. PIPPLIST DC F'0' Parameter list for PIPRESUM. RETCODE DC F'0' Default return code. SPACE 3 *---------------------------------------------------------------------* * Define the parameter token list for running a co-pipe pipeline; * * define the co-pipe communications area; and define the request * * parameter lists (RPLs) for the two "fitting" stages. * *---------------------------------------------------------------------* SPACE 1 PIPTPARM BOTH,CMS=YES, Generate parameter token list. + (fitg,COMMAREA), Specify communications area. + (pipe,PIPELINE,PIPEND-PIPELINE) Specify the pipeline. * PIPELINE DC C'(endchar ?)' Pipeline with two fittings. DC C'< profile exec |' Read file from minidisk. DC C'fitting left' Into this program. DC C'?' DC C'fitting right |' From this program. DC C'console' Onto the console. PIPEND EQU * * PIPFTPRM COMMAREA,TYPE=CSECT,RPL=LEFTRPL Co-pipe comm. area. * PIPFTRPL LEFTRPL,TYPE=CSECT,FITTING='left',START=READ, + NEXT=RIGHTRPL PIPFTRPL RIGHTRPL,TYPE=CSECT,FITTING='right',START=IDLE EJECT , LTORG , END COPIPCPY

Appendix C

A SIMPLE EXAMPLE OF USING AN ENCODED PIPELINE SPECIFICATION

         TITLE 'COPIPDSK -- Read Co-Pipe from Disk and Run It'
         SPACE 3
*---------------------------------------------------------------------*
* This program reads a pipeline specification from the file FITTING   *
* FILE and executes that pipeline as a co-pipe, reading and counting  *
* the records that it produces.
* * * * This program appends a "fitting" stage to the user's pipeline * * specification. Thus, the pipeline must be one that produces its * * output in the last stage that the user specified. * *---------------------------------------------------------------------* SPACE 3 COPIPDSK START 0 COPIPDSK AMODE ANY COPIPDSK RMODE ANY EJECT , *---------------------------------------------------------------------* * Slightly modified PIPRESUM macro * *---------------------------------------------------------------------* MACRO , Modified not to require pl/j. *MWV* .* Resume the co-pipe to do some more work. FITLV2 PIPRESUM &PLIST,&TERMINATE=NO,&RPL=,&WAITFOR=, *MWV* + &LISTPTR=PIPPLIST *MWV* AIF (D'PIPFTPRM).HAVEIT FITLV2 PIPFTPRM TYPE=DSECT FITLV2 &SYSLOC LOCTR , FITLV2 .HAVEIT ANOP , FITLV2 &P SETC 'PIPFTPRM' FITLV2 PIPBLDPL &LISTPTR,&PLIST Build a PLIST. *MWV* L 15,0(,1) Get parameter FITLV2 AIF (K'&RPL EQ 0).NORPL1 FITLV2 #PIPLA 14,&RPL Address RPL FITLV2 PUSH USING FITLV2 DROP , FITLV2 USING &P.,15 FITLV2 ST 14,&P.RPL Set RPL address FITLV2 POP USING FITLV2 .NORPL1 ANOP , FITLV2 AIF (K'&WAITFOR EQ 0).NOWAIT1 FITLV2 #PIPLA 14,&WAITFOR FITLV2 PUSH USING FITLV2 DROP , FITLV2 USING &P.,15 FITLV2 ST 14,&P.WAITFOR CHRALNUM POP USING FITLV2 .NOWAIT1 ANOP , FITLV2 1 0 Page 82 Using an Encoded Pipeline Specification ------------------------------------------------------------------------ 0 PUSH USING FITLV2 DROP , FITLV2 USING &P.,15 FITLV2 &B1 SETB ('&TERMINATE' EQ 'YES') FITLV2 &B2 SETB (K'&WAITFOR GT 0) FITLV2 MVI &P.REQTYPE,&B1*&P.TERMINATE+&B2*&P.WAITRPL Set * flags FITLV2 L 15,&P.RESUME Get resume entry point CHRALNUM BALR 14,15 Go there CHRALNUM POP USING FITLV2 MEND , FITLV2 *---------------------------------------------------------------------* EJECT , PRINT NOGEN FSCBD , File System Control Block DSECT. FSTD , File Status Table DSECT. REGEQU DMSPDEFS , To get magic definitions. PIPFTRPL TYPE=DSECT Define canonical names. 
PRINT GEN PIPFTPRM TYPE=DSECT,PREFIX=PRM Define shorter names. PIPFTRPL TYPE=DSECT,PREFIX=RPL Define shorter names. EJECT , COPIPDSK CSECT , STM R14,R12,12(R13) Save caller's registers. LR R10,R15 Establish base register. USING COPIPDSK,R10 LA R15,REGSAVE Establish savearea chain. ST R15,8(,R13) ST R13,4(,R15) LR R13,R15 * SR R4,R4 Initialize record counter. LA R7,COMMAREA Base co-pipe communications area. USING PRM,R7 LA R8,RPL1 Base request parameter list (RPL). USING RPL,R8 SPACE 1 *---------------------------------------------------------------------* * Determine size of file containing user's pipe specification: * *---------------------------------------------------------------------* FSSTATE FSCB=FITFSCB,ERROR=RETURN,FORM=E Get attributes. USING FSTD,R1 L R2,FSTAIC Alternate item count. L R1,FSTLRECL Maximum record length. DROP R1 MR R0,R2 Maximum possible byte count. LA R1,7+L'FITSTAGE-1(R1) Plus space for "fitting" stage. N R1,FN8 Round to doubleword. SPACE 1 *---------------------------------------------------------------------* * Obtain the storage needed for the co-pipe pipe specification: * *---------------------------------------------------------------------* 1 0 Using an Encoded Pipeline Specification Page 83 ------------------------------------------------------------------------ 0 CMSSTOR OBTAIN,BYTES=(R1) Request the storage. LTR R15,R15 BNZ ERROUT Exit if storage is not available. STM R0,R1,STORSAVE Remember where it is. SPACE 1 *---------------------------------------------------------------------* * Store address and size of pipeline in area referenced by the * * "pipe" parameter token: * *---------------------------------------------------------------------* SPACE 1 ST R1,APIPE Store address of pipeline spec. ST R0,LPIPE Store length of pipeline spec. 
SPACE 1 *---------------------------------------------------------------------* * Store address and size as arguments for "storage" stage and size * * as argument for "pad" stage: * *---------------------------------------------------------------------* CVD R0,DWD Length for pipeline specification. UNPK STORARGS+9(3),DWD+6(2) Unpack 3 rightmost. OI STORARGS+11,C'0' Fix sign. MVC PADARGS,STORARGS+9 Copy to "pad" argument. * ST R1,FWD Address for pipeline specification. UNPK DWD(9),FWD(5) NC DWD,=8X'0F' TR DWD,=C'0123456789abcdef' MVC STORARGS(8),DWD Fill in hex address for "storage". SPACE 1 *---------------------------------------------------------------------* * Run the encoded pipeline to read the co-pipe into memory: * *---------------------------------------------------------------------* CMSCALL CALLTYP=PROGRAM,PLIST=GETF,COPY=NO Run encoded pipe. LTR R15,R15 Did it fail? BNZ ERROUT Yes, exit. SPACE 1 *---------------------------------------------------------------------* * Set up the co-pipe: * *---------------------------------------------------------------------* CMSCALL CALLTYP=PROGRAM,PLIST=COPIPE,COPY=NO Set up co-pipe. LTR R15,R15 Did it fail? BNZ ERROUT Yes, exit. SPACE 1 *---------------------------------------------------------------------* * Loop running the co-pipe until all records are read: * *---------------------------------------------------------------------* LOOP CLI RPLSTATE,RPLRDONE Did READ complete? BNE EXIT No, we're done; exit. LA R4,1(R4) Increment counter. MVI RPLSTATE,RPLREAD Consume that and read another. PIPRESUM COMMAREA CLI PRMSTATUS,PRMREADY Normal completion? BE LOOP Yes, go process the record. 
SPACE 1 1 0 Page 84 Using an Encoded Pipeline Specification ------------------------------------------------------------------------ 0 *---------------------------------------------------------------------* * Terminate the co-pipe, display the record count, free gotten * * storage, and exit: * *---------------------------------------------------------------------* EXIT CLI PRMSTATUS,PRMDONE Has co-pipe ended? BE RETURN Yes, can leave now. PIPRESUM COMMAREA,TERMINATE=YES Terminate co-pipe. * RETURN CVD R4,DWD Format record count. UNPK COUNTCNT,DWD+5(3) Unpack 5 rightmost. OI COUNTCNT+4,C'0' Fix sign. APPLMSG TEXTA=COUNTMSG Issue message. * RELSTORE LM R2,R3,STORSAVE Length and address of memory. LTR R3,R3 Did we get any? BZ RESTORE No, branch. CMSSTOR RELEASE,BYTES=(R2),ADDR=(R3) Yes, release it. * RESTORE L R13,4(,R13) Restore registers and return. L R14,12(,R13) L R15,RETCODE Set the return code. LM R0,R12,20(R13) BR R14 * ERROUT ST R15,RETCODE Set the return code. B RELSTORE Release storage and exit. EJECT , *---------------------------------------------------------------------* * Constants and workareas: * *---------------------------------------------------------------------* DS 0D PIPPLIST DC F'0' Parameter list for PIPRESUM. * FWD DC F'0' Workarea. DWD DC D'0' Workarea. DC D'0' Workarea overflow. * REGSAVE DC 18F'0' Register savearea. STORSAVE DC 2F'0' Save pointer to gotten memory. RETCODE DC F'0' Default return code. FN8 DC F'-8' For rounding to doubleword. * FITFSCB FSCB 'FITTING FILE *',FORM=E File containing user's pipe. * COUNTMSG DC AL1(CNTEND) Message issued at termination. COUNTCNT DS CL5 DC C' records read from co-pipe.' 
CNTEND EQU *-COUNTMSG-1 SPACE 3 *---------------------------------------------------------------------* * Define encoded pipeline specification for reading the user's pipe * * from disk and define the parameter token list for running this * * encoded pipeline specification: * 1 0 Using an Encoded Pipeline Specification Page 85 ------------------------------------------------------------------------ 0 *---------------------------------------------------------------------* SPACE 1 * 'PIPE', * '< fitting file |', Read user's pipeline specification. * 'append literal || fitting in1 |', Append a "fitting" stage. * 'join * |', Join into single piece. * 'pad nnn |', Pad to length set in "pipe" token. * 'storage xxxxxxxx nnn e0' Write into gotten storage. SPACE 1 PIPSCBLK ENCODED,TYPE=RUNPIPE PIPSCSTG TYPE=BEGIN PIPSCSTG TYPE=STAGE,VERB='<',ARGS=(FITFSCB+FSCBFN-FSCBD,18) PIPSCSTG TYPE=STAGE,VERB='append',ARGS=(APPARGS,APPLEN) PIPSCSTG TYPE=STAGE,VERB='join',ARGS='*' PIPSCSTG TYPE=STAGE,VERB='pad',ARGS=PADARGS PIPSCSTG TYPE=STAGE,VERB='storage',ARGS=STORARGS PIPSCSTG TYPE=DONE * PADARGS DC C'nnn' Length of gotten storage. STORARGS DC C'xxxxxxxx nnn e0' Address & length of gotten storage. * APPARGS DC C'literal ' FITSTAGE DC C'|| fitting in1' To append to pipe read from file. APPLEN EQU *-APPARGS * PIPTPARM GETF,CMS=YES, Generate parameter token list. + (encd,ENCODED) Specify the encoded pipeline. SPACE 3 *---------------------------------------------------------------------* * Define the parameter token list for running the user's pipeline * * as a co-pipe; define the co-pipe communications area; define the * * request parameter list (RPL) for the "fitting" stage: * *---------------------------------------------------------------------* SPACE 1 DROP , All registers. USING COPIPE,R1 R1 is known by pipeline. PIPTPARM COPIPE,CMS=YES, Generate parameter token list. + (fitg,COMMAREA), Specify communications area. + (pipe,%APIPE,%LPIPE) Specify the pipeline. 
         DROP  ,
         SPACE 1
LPIPE    DS    F                   Length of pipeline.
APIPE    DS    F                   Address of pipeline.
*
         PIPFTPRM COMMAREA,TYPE=CSECT,RPL=RPL1    Co-pipe comm. area.
*
         PIPFTRPL RPL1,TYPE=CSECT,FITTING='in1',START=READ
         EJECT ,
         LTORG ,
         END   COPIPDSK

Appendix D

EXAMPLE OF A FEEDBACK LOOP WITH "LOOKUP"

Recently on the CMSPIP-L list, a question was raised about using the new streams on the lookup stage to add and delete master records while lookup is running. The specific question involved a service machine that was to receive ADD, CHANGE, DELETE, and QUERY commands from users to process against a master file. The master file needed to be kept current on disk, so that the information would be safe if the system crashed, but it also needed to be kept current in the master file of a lookup stage in order to provide good response time to interactive commands.

The records that lookup received on its tertiary input stream (master file adds) and its quaternary input stream (master file deletes) would be derived from records it had written to its output streams. For example, a CHANGE record would first be fed into the primary input of lookup. If a match was found in the master file, the CHANGE record would then be read into the quaternary input stream to cause the existing record to be deleted, after which it would be fed into the tertiary input stream to cause the new record to be added to the master file. In other words, the pipeline would contain a feedback loop.

Pipelines with feedback loops are extremely powerful, but coding one requires a rather sophisticated knowledge of how records flow through a pipeline.
And this case is further complicated by the fact that lookup terminates only when all of its input streams have gone to end-of-file (or all of its output streams have been severed). This means that when one is feeding its output back into its input, one must find a way to close off all of the inputs when the primary input has gone to end-of-file.

Jeff Gribbin, of EDS in the UK, provided a well-thought-out solution to this problem, which he has graciously permitted me to reproduce (see below). Two of the idioms that Jeff uses in his example may be unfamiliar:

1. not is used to run a stage with its output streams reversed. That is, the records that would normally be written to the stage's secondary output are written to its primary output, and vice versa. So, when Jeff uses Data: Not Count Lines, the lines being counted go to the secondary output of count, rather than to the primary output, and the count of the lines goes to the primary output.

   Similarly, when he uses Not Chop AFTER X00 (with no secondary stream defined), the part of the record up to '00'x, which would normally have been written to the primary output, goes instead to the undefined secondary (and is thus discarded). The portion of the record after '00'x, which is the only part he is interested in, gets written to the primary output.

2. gate is used to shut down a pipeline (or portions of a pipeline) delicately. The concept is very simple. gate can have as many streams as you like. When it sees an input record on streams 1-n, it simply copies that record to the matching output stream. When it sees an input record on stream 0, however, it terminates.

   That is all it does, but if one has used it properly, the termination of gate will start end-of-file propagating forward and backward through a complex pipeline.
One codes references to gate at strategic points in all of the pipeline segments that one wishes to be able to terminate. So, in Jeff's example, the secondary streams of gate flow through the label Gate: in this segment:

+-----------------------------------------------------------------------+
|                                                                       |
|   "?",                                                                |
|   " LkAdd: Faninany",   /* Dynamic additions to the master file */    |
|   "| Gate:",            /* Guarantee eof on all Lookup streams  */    |
|   "| Look:",            /* Send them to Lookup's Tertiary Input */    |
|   "?",                                                                |
|                                                                       |
+-----------------------------------------------------------------------+

That is, data flows from that faninany into the tertiary input of the lookup through the secondary input and output of gate. Thus, when gate terminates, faninany sees end-of-file on its output and also terminates. And lookup sees end-of-file on its tertiary input, which is one of the requirements for it to terminate.

Jeff also uses tertiary streams for his gate, so its termination also spreads end-of-file to another segment of the pipeline, the one that includes the quaternary input to lookup.

The primary input to gate gets a record only after lookup has read all of the records on its primary input, so gate is triggered to terminate when it is time for lookup to be shut down. That is, lookup has read the last record from its primary input stream, so that stream is severed. The termination of gate causes the other two input streams for lookup to be severed, thus meeting all of the requirements for lookup itself to terminate, at which point the pipeline quietly winds down.

The other interesting piece of strategy in this example is that the requested transactions are not applied to the master file on disk in real time. Instead, records describing the transactions are appended to the end of that file as they arrive. When the server is started up, it reads the entire master file, always retaining the last record for each key and deleting any records for which a deletion request is found.
After this clean-up, it writes the file back to disk as well as using it as the master file for lookup. This strategy greatly simplifies the maintenance of the disk file.

Here is how one might run Jeff's filter (which is called SAMPLE REXX):

pipe console | sample | console
MAINT ADD Data for MAINT
MAINT QUERY

And here is SAMPLE REXX:

/*********************************************************************/
/* Sample of multi-stream Lookup - */
/* Untested - purely for use as a conversation-piece. */
/* Jeff Gribbin May 1996 */
/*********************************************************************/
Signal ON NOVALUE /* (Just a habit of mine) */
/*-------------------------------------------------------------------*/
/* Input-stream is:- */
/* */
/* Output-stream is:- */
/* Either 0 */
/* -or- 1 */
/* "0" means that the command failed, "1" means that it worked */
/*-------------------------------------------------------------------*/
Pipe = "" /* Let's build a looooong pipe */
Pipe = Pipe ||,
  "CALLPIPE (Name Sample End ?)", /* Use REXX PDCALL for PIPEDEMO */
  " *.Input.0:", /* Get the command-stream */
  "| Data: Not Count Lines",
  , /*------------------------------------------------------------*/
  , /* "Lookup" doesn't terminate until ALL its inputs go to */
  , /* end-of-file. The way I've chosen to generate end-of-file */
  , /* on all the inputs is to put them through a gate which is */
  , /* closed after the last record has been consumed. The way I */
  , /* close the gate is to use "Count Lines" which will output a */
  , /* "Count" record when _its_ input goes to end-of-file. I use */
  , /* "Not" simply to reverse the output streams from "Count" */
  , /* because it's more convenient to code the gate at the start */
  , /* of the pipeline specification.
*/
  , /* */
  , /* Note that "Count" is especially appropriate for this kind */
  , /* of job because it ALWAYS outputs a record - even if there */
  , /* are NO input records ... */
  , /*------------------------------------------------------------*/
  "| Gate: Gate", /* This gate will close when it sees the count */
  "?"
Pipe = Pipe ||,
  , /*------------------------------------------------------------*/
  , /* This pipeline updates the disk control file that's used to */
  , /* prime the Lookup "Master" table at startup time. Each */
  , /* record is first appended to DATA FILE A using "Fileslow" */
  , /* and then the record is used to generate a "FINIS" CMS */
  , /* command which closes the file. This guarantees that each */
  , /* change is "committed" to disk as soon as it happens. This */
  , /* is a trade-off between performance and certainty - I am */
  , /* assuming that the rate of change is relatively low ... */
  , /*------------------------------------------------------------*/
  " Fileupd: Faninany", /* Disk-file updates */
  "| Fileslow DATA FILE A", /* Append the new record */
  "| Spec /FINIS DATA FILE A/ 1",
  "| Command", /* Commit the change to disk */
  "?"
Pipe = Pipe ||,
  , /*------------------------------------------------------------*/
  , /* This is where the Primary Input Data Stream gets */
  , /* processed ... */
  , /*------------------------------------------------------------*/
  " Data:", /* The "counted" data stream ... */
  "| Spec Word 1 1 Words 2-* 10", /* Ensure userid is in cols 1-8 */
  , /*------------------------------------------------------------*/
  , /* The following "Fanout" is critical to the success of this */
  , /* pipeline.
It is essential to ensure that the output from */
  , /* the Lookup stage (either Primary or Secondary) is consumed */
  , /* immediately; otherwise, the Lookup stage will be */
  , /* non-dispatchable when we want to update the Master Table, */
  , /* and the pipeline will stall. HOWEVER, we DON'T want Lookup */
  , /* to have access to another Primary record until we've */
  , /* completely processed the "current" record. "Fanout" */
  , /* achieves this for us by requiring that each copy of its */
  , /* output record be consumed before it outputs the next copy */
  , /* and by not reading a new record until the last copy has */
  , /* been consumed. (This also has the added benefit that this */
  , /* sample pipe does not itself delay records.) */
  , /*------------------------------------------------------------*/
  "| Blocker: Fanout",
  "| Look: Lookup 1-8 Master", /* Do we know this user? */
  "| Change //1/", /* Yes - Prefix Master Record with "FOUND" flag */
  "| Checked: Faninany", /* (Bring in "NOT FOUND" stuff as well) */
  , /*------------------------------------------------------------*/
  , /* Here's another "key" moment; we've just brought together */
  , /* the output from Lookup - either a "Master" record from the */
  , /* Primary Output if the key was found, or a "Flag" record */
  , /* from the Secondary Output if the key was not found - and */
  , /* we're going to feed this (through a "Faninany") into a */
  , /* "Join 1". "Join" will immediately consume this record, */
  , /* which will cause "Lookup" to consume _its_ primary input */
  , /* which will cause the "Fanout" at "Blocker:" to output the */
  , /* second copy of _its_ input, which is also passed (via the */
  , /* "Faninany" at "Detail:") to the "Join". "Join" now has */
  , /* sufficient data to create an output record, so it does.
*/
  , /* HOWEVER, it DOES NOT consume its latest input record until */
  , /* its OUTPUT record is consumed, so (provided we don't use */
  , /* any stages that delay records), we now have a "Lookup" */
  , /* that is dispatchable (and can therefore accept Table */
  , /* Updates on its Tertiary and Quaternary input) but which */
  , /* WON'T yet see any data on its Primary Input (phew!). */
  , /*------------------------------------------------------------*/
  "| Detail: Faninany", /* (A copy of the detail record) */
  "| Join 1 X00", /* Append "Detail" to "Master" */
  , /*------------------------------------------------------------*/
  , /* We've now got either:- */
  , /* <1> */
  , /* or:- */
  , /* <0> */
  , /* */
  , /* This is awkward to process, so we use a "Spec" stage to */
  , /* swap things around a bit ... */
  , /*------------------------------------------------------------*/
  "| Spec Fieldsep 00",
  "1 1", /* FOUND/NOT FOUND flag stays in Column 1 */
  "Field 2 2", /* "Detail" now starts in Column 2 */
  "X00 Next", /* We'll still need a field-separator */
  "Field 1 Next", /* "Master" is now the second field */
  , /*------------------------------------------------------------*/
  , /* So, now we've got either:- */
  , /* <1> <1> */
  , /* or:- */
  , /* <0> <0> */
  , /*------------------------------------------------------------*/
  "| Query: NLocate (Word 2) /QUERY/", /* Filter out QUERY rqs */
  "| Chop X00", /* Only "QUERY" wants old Master-file info */
  "| Change: NLocate (Word 2) /CHANGE/", /* Filter out CHANGE rqs */
  "| Add: NLocate (Word 2) /ADD/", /* Filter out ADD rqs */
  "| Delete: NLocate (Word 2) /DELETE/", /* Filter out DELETE rqs */
  , /*------------------------------------------------------------*/
  , /* Command not recognised. Format an appropriate error record */
  , /* and pass it to our output stream ...
*/ , /*------------------------------------------------------------*/ "| Spec /0/ 1", /* Flag next stage that we had a problem */ "2.8 3", /* The issuing userid */ "Word 2 12", /* The supplied command */ "/Command not recognised./ Nextword", /* The message */ , /*------------------------------------------------------------*/ , /* All data for our output stream ends up here. When the data */ , /* is consumed by whatever we're connected to, all our stages */ , /* back to the "Fanout" at "Blocker:" will in turn consume */ , /* _their_ input, allowing "Blocker:" to consume our Primary */ , /* Input and then see if there's any more work to do ... */ , /*------------------------------------------------------------*/ "| Outnext: Faninany", /* (The gateway to the next stage) */ "| *.Output.0:", /* Pass our results to the next stage */ "?" Pipe = Pipe ||, , /*------------------------------------------------------------*/ , /* A small, but vital, pipe, whose purpose and operation is */ , /* explained above ... */ , /*------------------------------------------------------------*/ " Blocker:", "| Detail:", 1 0 Feedback Loop with "lookup" Page 91 ------------------------------------------------------------------------ 0 "?" Pipe = Pipe ||, , /*------------------------------------------------------------*/ , /* This is the pipe that feeds the initial data into Lookup's */ , /* "Master Table"; it does this by reading DATA FILE A. This */ , /* file has records in the following format:- */ , /* Either:- */ , /* Cols 01-08 Userid */ , /* 09 Blank */ , /* 10-* Data */ , /* or:- */ , /* Cols 01-08 Userid */ , /* 09 "D" (Which means that the record is deleted) */ , /* */ , /* Before "Feeding the Lookup", this pipe tidies up the disk */ , /* file by throwing away all records except the last for any */ , /* particular userid, and even throwing _that_ record away if */ , /* it contains a "D" in Column 9. 
*/ , /*------------------------------------------------------------*/ " Literal DATA FILE A" ||, /* "Our" fileid ... */ "| State Nodetails Quiet", /* ... see if it exists ... */ "| Getfiles", /* ... if it does, read it - if not, no problem */ "| Sort 1-8", /* Sort by userid */ "| Unique 1-8 Last", /* Take only the last of each userid */ "| Nlocate 9.1 /D/", /* Throw these ones away as well */ "| > DATA FILE A", /* Refresh the on-disk file */ , /*------------------------------------------------------------*/ , /* Note that "Lookup" reads ALL its secondary input before it */ , /* writes ANY output. We shall therefore have completely */ , /* rewritten AND CLOSED "DATA FILE A" in this pipe BEFORE we */ , /* attempt to make any updates to it. I.E., we're "safe". */ , /*------------------------------------------------------------*/ "| Look:", /* Feed it into Lookup's "startup" stream */ , /*------------------------------------------------------------*/ , /* Lookup's secondary output is "Detail" records which don't */ , /* have a matching key in the "Master" table. In this case, */ , /* all we're interested in is the fact that the record wasn't */ , /* found, so we change the record to a simple "Not found" */ , /* flag ... */ , /*------------------------------------------------------------*/ "| Spec /0/ 1", /* Flag as "NOT FOUND" */ "| Checked:", /* Rejoin the main stream */ "?" Pipe = Pipe ||, , /*------------------------------------------------------------*/ , /* This is the pipe that feeds Lookup's Tertiary Input, which */ , /* is used to make dynamic additions to the Master Table. */ , /* Lookup will consume these records immediately it receives */ , /* them. 
*/ , /*------------------------------------------------------------*/ " LkAdd: Faninany", /* Dynamic additions to the master file */ "| Gate:", /* Guarantee eof on all Lookup streams */ "| Look:", /* Send them to Lookup's Tertiary Input */ 1 0 Page 92 Feedback Loop with "lookup" ------------------------------------------------------------------------ 0 "?" Pipe = Pipe ||, , /*------------------------------------------------------------*/ , /* This is the pipe that feeds Lookup's Quaternary Input, */ , /* which is used to make dynamic deletions from the Master */ , /* Table. Lookup will consume these records immediately it */ , /* receives them. */ , /*------------------------------------------------------------*/ " LkDel: Faninany", /* Dynamic deletions from the master file */ "| Gate:", /* Guarantee eof */ "| Look:", /* Send them to Lookup's Quaternary Input */ "?" Pipe = Pipe ||, " Delete:", , /*------------------------------------------------------------*/ , /* This pipe-set processes "DELETE" commands ... */ , /*------------------------------------------------------------*/ "| DelOK: StrFind /0/", /* Nothing to delete? */ "| Spec /0/ 1", /* Flag next stage that we had a problem */ "2.8 3", /* The issuing userid */ "Word 2 12", /* The supplied command */ "/No entry to delete./ Nextword", /* The error message */ "| Outnext:", /* Pass the message to the Big Wide World */ "?" Pipe = Pipe ||, " DelOK:", "| Delcopy: Fanout", /* Make some copies */ , /*------------------------------------------------------------*/ , /* Use the first copy to append the update to the disk file. */ , /*------------------------------------------------------------*/ "| Spec 2.8 1", /* Userid */ "/D/ 9", /* "Deleted" flag */ "| Fileupd:", /* Update the disk file */ "?" Pipe = Pipe ||, " Delcopy:", , /*------------------------------------------------------------*/ , /* Use the second copy to update the Lookup Master Table. 
*/ , /*------------------------------------------------------------*/ "| Spec 2.8 1", /* Key for delete request */ "| LkDel:", "?" Pipe = Pipe ||, " Delcopy:", , /*------------------------------------------------------------*/ , /* Use the third copy to generate an output record. */ , /*------------------------------------------------------------*/ "| Spec /1/ 1", /* Flag next stage that we didn't have a problem */ "2.8 3", /* The issuing userid */ "Word 2 12", /* The supplied command */ "/Successfully deleted./ Nextword", /* The message */ "| Outnext:", /* Pass the message to the Big Wide World */ "?" 1 0 Feedback Loop with "lookup" Page 93 ------------------------------------------------------------------------ 0 Pipe = Pipe ||, " Query:", /* QUERY requests */ , /*------------------------------------------------------------*/ , /* This pipe-set processes "QUERY" commands ... */ , /*------------------------------------------------------------*/ "| QryOK: StrFind /0/", /* Nothing to query? */ "| Chop X00", /* Discard old Master-file info */ "| Spec /0/ 1", /* Flag next stage that we had a problem */ "2.8 3", /* The issuing userid */ "Word 2 12", /* The supplied command */ "/No entry to query./ Nextword", /* The error message */ "| Outnext:", /* Pass the message to the Big Wide World */ "?" Pipe = Pipe ||, " QryOK:", "| Not Chop AFTER X00", /* Extract Master-file info */ "| Spec /1/ 1", /* Flag next stage that we didn't have a problem */ "2.8 3", /* The issuing userid */ "/QUERY/ 12", /* The command */ "Words 2-* Nextword", /* The Master Data */ "| Outnext:", /* Pass the message to the Big Wide World */ "?" Pipe = Pipe ||, " Change:", /* CHANGE requests */ , /*------------------------------------------------------------*/ , /* This pipe-set processes "CHANGE" commands ... */ , /*------------------------------------------------------------*/ "| ChgOK: Strfind /0/", /* Nothing to change? 
*/ "| Spec /0/ 1", /* Flag next stage that we had a problem */ "2.8 3", /* The issuing userid */ "Word 2 12", /* The supplied command */ "/No entry to change./ Nextword", /* The error message */ "| Outnext:", /* Pass the message to the Big Wide World */ "?" Pipe = Pipe ||, " ChgOK:", "| Spec 2.8 1", /* Format the data ... */ "Words 3-* 10", /* ... for the Master File */ "| Chgcopy: Fanout", /* Make some copies */ , /*------------------------------------------------------------*/ , /* Use the first copy to append the update to the disk file. */ , /*------------------------------------------------------------*/ "| Fileupd:", "?" Pipe = Pipe ||, " Chgcopy:", , /*------------------------------------------------------------*/ , /* Use the second copy to delete the current entry in the */ , /* Lookup Master Table. */ , /*------------------------------------------------------------*/ "| Chop 8", /* Lookup needs only the key for delete */ "| LkDel:", "?" 1 0 Page 94 Feedback Loop with "lookup" ------------------------------------------------------------------------ 0 Pipe = Pipe ||, " Chgcopy:", , /*------------------------------------------------------------*/ , /* Use the third copy to add the updated entry to the Lookup */ , /* Master Table. */ , /*------------------------------------------------------------*/ "| LkAdd:", "?" Pipe = Pipe ||, " Chgcopy:", , /*------------------------------------------------------------*/ , /* Use the fourth copy to generate an output record. */ , /*------------------------------------------------------------*/ "| Spec /1/ 1", /* Flag next stage that we didn't have a problem */ "1.8 3", /* The issuing userid */ "/CHANGE/ 12", /* The command */ "/Successfully changed./ Nextword", /* The message */ "| Outnext:", /* Pass the message to the Big Wide World */ "?" Pipe = Pipe ||, " Add:", , /*------------------------------------------------------------*/ , /* This pipe-set processes "ADD" commands ... 
*/ , /*------------------------------------------------------------*/ "| AddOK: Strfind /1/", /* Already known? */ "| Spec /0/ 1", /* Flag next stage that we had a problem */ "2.8 3", /* The issuing userid */ "Word 2 12", /* The supplied command */ "/Already on file./ Nextword", /* The error message */ "| Outnext:", /* Pass the message to the Big Wide World */ "?" Pipe = Pipe ||, " AddOK:", "| Spec 2.8 1", /* Format the data ... */ "Words 3-* 10", /* ... for the Master File */ "| Addcopy: Fanout", /* Make some copies */ , /*------------------------------------------------------------*/ , /* Use the first copy to append the update to the disk file. */ , /*------------------------------------------------------------*/ "| Fileupd:", "?" Pipe = Pipe ||, " Addcopy:", , /*------------------------------------------------------------*/ , /* Use the second copy to add the updated entry to the Lookup */ , /* Master Table. */ , /*------------------------------------------------------------*/ "| LkAdd:", "?" Pipe = Pipe ||, " Addcopy:", , /*------------------------------------------------------------*/ , /* Use the third copy to generate an output record. */ 1 0 Feedback Loop with "lookup" Page 95 ------------------------------------------------------------------------ 0 , /*------------------------------------------------------------*/ "| Spec /1/ 1", /* Flag next stage that we didn't have a problem */ "1.8 3", /* The issuing userid */ "/ADD/ 12", /* The command */ "/Successfully added./ Nextword", /* The message */ "| Outnext:" /* Pass the message to the Big Wide World */ 0 Pipe /* Execute the pipe */ 0 Exit Rc
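For readers who find the multistream dataflow hard to follow on a first pass, it may help to see what the pipeline computes, stripped of the plumbing. The sketch below is a plain-Python analogue, not CMS Pipelines; the function and variable names (`compact`, `handle`, `journal`) are illustrative inventions, and it deliberately ignores the stall-avoidance mechanics (the "Fanout"/"Join" blocking) that the pipeline's comments explain, since those are properties of the Pipelines dispatcher, not of the data transformation. It models only the command semantics: a master table keyed on userid, ADD/CHANGE/DELETE/QUERY requests, updates journalled to a file of (userid, data-or-"D") records, and the startup compaction that keeps the last record per userid and drops deleted entries.

```python
def compact(journal):
    """Startup tidy-up, like Sort 1-8 | Unique 1-8 Last | Nlocate 9.1 /D/:
    keep only the last record per userid, dropping deleted entries."""
    table = {}
    for userid, data in journal:
        if data == "D":
            table.pop(userid, None)   # "D" in column 9 marks a deletion
        else:
            table[userid] = data      # later records supersede earlier ones
    return table

def handle(table, journal, userid, command, *args):
    """Process one request record ("userid COMMAND data...").
    Updates the in-memory table and appends to the journal, the way the
    pipe updates Lookup's Master Table and appends to DATA FILE A."""
    known = userid in table           # the "FOUND"/"NOT FOUND" flag
    if command == "QUERY":
        return table[userid] if known else "No entry to query."
    if command == "ADD":
        if known:
            return "Already on file."
        table[userid] = " ".join(args)
        journal.append((userid, table[userid]))
        return "Successfully added."
    if command == "CHANGE":
        if not known:
            return "No entry to change."
        table[userid] = " ".join(args)
        journal.append((userid, table[userid]))
        return "Successfully changed."
    if command == "DELETE":
        if not known:
            return "No entry to delete."
        del table[userid]
        journal.append((userid, "D"))
        return "Successfully deleted."
    return "Command not recognised."
```

What this analogue cannot show is precisely what makes the REXX version interesting: in the pipeline, the "table" lives inside a running "Lookup" stage, and the additions and deletions arrive on its tertiary and quaternary streams from downstream of its own output, which is why the careful blocking described in the comments is needed at all.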