A Brief History of Just-In-Time

JOHN AYCOCK
University of Calgary

Software systems have been using “just-in-time” compilation (JIT) techniques since the 1960s. Broadly, JIT compilation includes any translation performed dynamically, after a program has started execution. We examine the motivation behind JIT compilation and constraints imposed on JIT compilation systems, and present a classification scheme for such systems. This classification emerges as we survey forty years of JIT work, from 1960–2000.

Categories and Subject Descriptors: D.3.4 [Programming Languages]: Processors; K.2 [History of Computing]: Software

General Terms: Languages, Performance

Additional Key Words and Phrases: Just-in-time compilation, dynamic compilation

1. INTRODUCTION

Those who cannot remember the past are condemned to repeat it.
George Santayana, 1863–1952 [Bartlett 1992]

This oft-quoted line is all too applicable in computer science. Ideas are generated, explored, set aside, only to be reinvented years later. Such is the case with what is now called “just-in-time” (JIT) or dynamic compilation, which refers to translation that occurs after a program begins execution.

Strictly speaking, JIT compilation systems (“JIT systems” for short) are completely unnecessary. They are only a means to improve the time and space efficiency of programs. After all, the central problem JIT systems address is a solved one: translating programming languages

This work was supported in part by a grant from the National Science and Engineering Research Council of Canada.
Author’s address: Department of Computer Science, University of Calgary, 2500 University Dr. N.W., Calgary, Alta., Canada T2N 1N4; email: aycock@cpsc.ucalgary.ca.

Permission to make digital/hard copy of part or all of this work for personal or classroom use is granted without fee provided that the copies are not made or distributed for profit or commercial advantage, the copyright notice, the title of the publication, and its date appear, and notice is given that copying is by permission of ACM, Inc. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior specific permission and/or a fee.

© 2003 ACM 0360-0300/03/0600-0097 $5.00

ACM Computing Surveys, Vol. 35, No. 2, June 2003, pp. 97–113.

into a form that is executable on a target platform.

What is translated? The scope and nature of programming languages that require translation into executable form covers a wide spectrum. Traditional programming languages like Ada, C, and Java are included, as well as little languages [Bentley 1988] such as regular expressions.

Traditionally, there are two approaches to translation: compilation and interpretation. Compilation translates one language into another (C to assembly language, for example), with the implication that the translated form will be more amenable to later execution, possibly after further compilation stages. Interpretation eliminates these intermediate steps, performing the same analyses as compilation, but performing execution immediately.

JIT compilation is used to gain the benefits of both (static) compilation and interpretation. These benefits will be brought out in later sections, so we only summarize them here:

—Compiled programs run faster, especially if they are compiled into a form that is directly executable on the underlying hardware. Static compilation can also devote an arbitrary amount of time to program analysis and optimization. This brings us to the primary constraint on JIT systems: speed.
A JIT system must not cause untoward pauses in normal program execution as a result of its operation.

—Interpreted programs are typically smaller, if only because the representation chosen is at a higher level than machine code, and can carry much more semantic information implicitly.

—Interpreted programs tend to be more portable. Assuming a machine-independent representation, such as high-level source code or virtual machine code, only the interpreter need be supplied to run the program on a different machine. (Of course, the program still may be doing nonportable operations, but that’s a different matter.)

—Interpreters have access to run-time information, such as input parameters, control flow, and target machine specifics. This information may change from run to run or be unobtainable prior to run-time. Additionally, gathering some types of information about a program before it runs may involve algorithms which are undecidable using static analysis.

To narrow our focus somewhat, we only examine software-based JIT systems that have a nontrivial translation aspect. Keppel et al. [1991] eloquently built an argument for the more general case of run-time code generation, where this latter restriction is removed.

Note that we use the term execution in a broad sense: we call a program representation executable if it can be executed by the JIT system in any manner, either directly as in machine code, or indirectly using an interpreter.

2. JIT COMPILATION TECHNIQUES

Work on JIT compilation techniques often focuses around implementation of a particular programming language. We have followed this same division in this section, ordering from earliest to latest where possible.

2.1. Genesis

Self-modifying code has existed since the earliest days of computing, but we exclude that from consideration because there is typically no compilation or translation aspect involved.

Instead, we suspect that the earliest published work on JIT compilation was McCarthy’s [1960] LISP paper.
He mentioned compilation of functions into machine language, a process fast enough that the compiler’s output needn’t be saved. This can be seen as an inevitable result of having programs and data share the same notation [McCarthy 1981].

Another early published reference to JIT compilation dates back to 1966. The University of Michigan Executive System for the IBM 7090 explicitly notes that the assembler [University of Michigan 1966b, p. 1] and loader [University of Michigan 1966a, p. 6] can be used to translate and load during execution. (The manual’s preface says that most sections were written before August 1965, so this likely dates back further.)

Thompson’s [1968] paper, published in Communications of the ACM, is frequently cited as “early work” in modern publications. He compiled regular expressions into IBM 7094 code in an ad hoc fashion, code which was then executed to perform matching.

2.2. LC²

The Language for Conversational Computing, or LC², was designed for interactive programming [Mitchell et al. 1968]. Although used briefly at Carnegie-Mellon University for teaching, LC² was primarily an experimental language [Mitchell 2000]. It might otherwise be consigned to the dustbin of history, if not for the techniques used by Mitchell in its implementation [Mitchell 1970], techniques that later influenced JIT systems for Smalltalk and Self.

Fig. 1. The time-space tradeoff.

Mitchell observed that compiled code can be derived from an interpreter at run-time, simply by storing the actions performed during interpretation. This only works for code that has been executed, however; he gave the example of an if-then-else statement, where only the else-part is executed. To handle such cases, code is generated for the unexecuted part which reinvokes the interpreter should it ever be executed (the then-part, in the example above).
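Mitchell’s scheme can be sketched in modern terms: run the program once under an interpreter, record each primitive action as it is performed, and stitch the recorded actions into directly executable code, leaving a stub in place of any branch that was never taken. The Python sketch below is only illustrative; the AST encoding, the function names, and the use of closures in place of real machine code are our assumptions, not details of the LC² implementation.

```python
# A sketch of Mitchell's technique: "compiled" code is derived by recording
# the actions an interpreter performs. The AST encoding and the use of
# Python closures instead of machine code are illustrative assumptions.

def interpret(node, env):
    """Ordinary interpreter for a tiny expression language."""
    op = node[0]
    if op == "const":
        return node[1]
    if op == "var":
        return env[node[1]]
    if op == "add":
        return interpret(node[1], env) + interpret(node[2], env)
    if op == "if":
        return interpret(node[2] if interpret(node[1], env) else node[3], env)
    raise ValueError(op)

def compile_by_running(node, env):
    """Interpret `node` once, returning (result, compiled-code closure)."""
    op = node[0]
    if op == "const":
        value = node[1]
        return value, lambda env: value
    if op == "var":
        name = node[1]
        return env[name], lambda env: env[name]
    if op == "add":
        a, code_a = compile_by_running(node[1], env)
        b, code_b = compile_by_running(node[2], env)
        return a + b, lambda env: code_a(env) + code_b(env)
    if op == "if":
        cond, code_cond = compile_by_running(node[1], env)
        taken = node[2] if cond else node[3]   # the arm actually executed
        other = node[3] if cond else node[2]   # the arm we never saw
        result, code_taken = compile_by_running(taken, env)
        sense = bool(cond)
        def compiled(env):
            if bool(code_cond(env)) == sense:
                return code_taken(env)
            return interpret(other, env)  # stub: reinvoke the interpreter
        return result, compiled
    raise ValueError(op)

# Only the then-part is compiled; the else-part is left as a stub.
prog = ("if", ("var", "x"),
        ("add", ("const", 1), ("var", "x")),  # then-part (executed)
        ("const", 0))                         # else-part (never executed)
result, code = compile_by_running(prog, {"x": 41})
print(result, code({"x": 0}))   # 42 0
```

Note how `compiled` contains real code only for the branch that actually ran; the untaken arm costs nothing unless it is reached later, at which point the stub falls back to interpretation, exactly the behavior Mitchell described.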
2.3. APL

The seminal work on efficient APL implementation is Abrams’ dissertation [Abrams 1970]. Abrams concocted two key APL optimization strategies, which he described using the connotative terms drag-along and beating. Drag-along defers expression evaluation as long as possible, gathering context information in the hopes that a more efficient evaluation method might become apparent; this might now be called lazy evaluation. Beating is the transformation of code to reduce the amount of data manipulation involved during expression evaluation.

Drag-along and beating relate to JIT compilation because APL is a very dynamic language; types and attributes of data objects are not, in general, known until run-time. To fully realize these optimizations’ potential, their application must be delayed until run-time information is available.

Abrams’ “APL Machine” employed two separate JIT compilers. The first translated APL programs into postfix code for a D-machine,¹ which maintained a buffer of deferred instructions. The D-machine acted as an “algebraically simplifying compiler” [Abrams 1970, p. 84] which would perform drag-along and beating at run-time, invoking an E-machine to execute the buffered instructions when necessary.

Abrams’ work was directed toward an architecture for efficient support of APL, hardware support for high-level languages being a popular pursuit of the time. Abrams never built the machine, however; an implementation was attempted a few years later [Schroeder and Vaughn 1973].² The techniques were later expanded upon by others [Miller 1977], although the basic JIT nature never changed, and were used for the software implementation of Hewlett-Packard’s APL\3000 [Johnston 1977; van Dyke 1977].

2.4. Mixed Code, Throw-Away Code, and BASIC

The tradeoff between execution time and space often underlies the argument for JIT compilation. This tradeoff is summarized in Figure 1.
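The drag-along and beating of Section 2.3 can be caricatured in a few lines of Python: operations are recorded rather than performed, a structural operation like reversal becomes index arithmetic instead of data movement, and the deferred elementwise operations are fused into a single pass when a value is finally demanded. The `Deferred` class and its methods are our own illustrative names, far removed from Abrams’ actual D-machine and E-machine design.

```python
# A toy sketch of Abrams-style drag-along and beating (illustrative only).

class Deferred:
    """Drag-along: record operations on a vector instead of performing them."""

    def __init__(self, data):
        self.data = list(data)
        self.reversed = False   # beating: reversal becomes index arithmetic
        self.pending = []       # drag-along: deferred elementwise operations

    def reverse(self):
        # No data is moved; we only flip how indices will be generated.
        self.reversed = not self.reversed
        return self

    def map(self, fn):
        # Defer the elementwise operation until a value is demanded.
        self.pending.append(fn)
        return self

    def force(self):
        """Evaluate all deferred work in a single pass, with no temporaries."""
        if self.reversed:
            indices = range(len(self.data) - 1, -1, -1)
        else:
            indices = range(len(self.data))
        out = []
        for i in indices:
            v = self.data[i]
            for fn in self.pending:   # fused chain of deferred operations
                v = fn(v)
            out.append(v)
        return out

v = Deferred([1, 2, 3]).map(lambda x: x + 1).reverse().map(lambda x: x * 10)
print(v.force())   # [40, 30, 20]
```

The expression above makes one pass over the data and never materializes the intermediate incremented or reversed vectors; that deferral window, between building up `pending` and calling `force`, is where Abrams’ run-time simplification could be applied.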
The other consideration is that most programs spend the majority of time executing a minority of code, based on data from empirical studies [Knuth 1971]. Two ways to reconcile these observations have appeared: mixed code and throw-away compiling.

Mixed code refers to the implementation of a program as a mixture of native code and interpreted code, proposed independently by Dakin and Poole [1973] and Dawson [1973]. The frequently executed parts of the program would be in native code, the infrequently executed parts interpreted, hopefully yielding a smaller memory footprint with little or no impact on speed. A fine-grained mixture is implied: implementing the program with interpreted code and the libraries with native code would not constitute mixed code.

A further twist to the mixed code approach involved customizing the interpreter [Pittman 1987]. Instead of mixing native code into the program, the native code manifests itself as special virtual machine instructions; the program is then compiled entirely into virtual machine code.

The basic idea of mixed code, switching between different types of executable code, is still applicable to JIT systems, although few researchers at the time advocated generating the machine code at run-time. Keeping both a compiler and an interpreter in memory at run-time may have been considered too costly on the machines of the day, negating any program size tradeoff.

¹Presumably D stood for Deferral or Drag-Along.
²In the end, Litton Industries (Schroeder and Vaughn’s employer) never built the machine [Mauriello 2000].

The case against mixed code comes from software engineering [Brown 1976].
Even assuming that the majority of code will be shared between the interpreter and compiler, there are still two disparate pieces of code (the interpreter proper and the compiler’s code generator) which must be maintained and exhibit identical behavior.

(Proponents of partial evaluation, or program specialization, will note that this is a specious argument in some sense, because a compiler can be thought of as a specialized interpreter [Jones et al. 1993]. However, the use of partial evaluation techniques is not currently widespread.)

This brings us to the second manner of reconciliation: throw-away compiling [Brown 1976]. This was presented purely as a space optimization: instead of static compilation, parts of a program could be compiled dynamically on an as-needed basis. Upon exhausting memory, some or all of the compiled code could be thrown away; the code would be regenerated later if necessary.

BASIC was the testbed for throw-away compilation. Brown [1976] essentially characterized the technique as a good way to address the time-space tradeoff; Hammond [1977] was somewhat more adamant, claiming throw-away compilation to be superior except when memory is tight.

A good discussion of mixed code and throw-away compiling may be found in Brown [1990].

2.5. FORTRAN

Some of the first work on JIT systems where programs automatically optimize their “hot spots” at run-time was due to Hansen [1974].³ He addressed three important questions:

(1) What code should be optimized? Hansen chose a simple, low-cost frequency model, maintaining a frequency-of-execution counter for each block of code (we use the generic term block to describe a unit of code; the exact nature of a block is immaterial for our purposes).

(2) When should the code be optimized? The frequency counters served a second rôle: crossing a threshold value made the associated block of code a candidate for the next “level” of optimization, as described below.
“Supervisor” code was invoked between blocks, which would assess the counters, perform optimization if necessary, and transfer control to the next block of code. The latter operation could be a direct call, or interpreter invocation; mixed code was supported by Hansen’s design.

(3) How should the code be optimized? A set of conventional machine-independent and machine-dependent optimizations were chosen and ordered, so a block might first be optimized by constant folding, by common subexpression elimination the second

³Dawson [1973] mentioned a 1967 report by Barbieri and Morrissey where a program begins execution in interpreted form, and frequently executed parts “can be converted to machine code.” However, it is not clear if the conversion to machine code occurred at run-time. Unfortunately, we have not been able to obtain the cited work as of this writing.