Over the past year, the WebKit project made tremendous progress on the ability to optimize JavaScript applications. A major part of that effort was the introduction of the Fourth Tier LLVM (FTL) JIT. The Fourth Tier JIT targets long-running JavaScript content and performs a level of optimization beyond WebKit's interpreter, baseline JIT, and high-level optimizing JIT. See the FTL Optimization Strategy section below for more on WebKit's tiered optimizations. The engineering advancements within WebKit that made the FTL possible were described by Filip Pizlo in the Surfin' Safari Blog post, Introducing the WebKit FTL JIT. On April 29, 2014, the WebKit team enabled FTL by default on trunk: r167958.
This achievement also represents a significant milestone for the LLVM community. FTL makes it clear that LLVM can be used to accelerate a dynamically type-checked language in a competitive production environment. This in itself is a tremendous success story and shows the advantage of the highly modular and flexible design of LLVM. It is the first time that the LLVM infrastructure has supported self-modifying code, and the first time profile-guided information has been used inside the LLVM JIT. Even though this project pioneered new territory for LLVM, it was in no way an academic exercise. To be successful, FTL must perform at least as well as non-FTL JavaScript engines in use today across a range of workloads without compromising reliability. This post describes the technical aspects of that accomplishment as they relate to LLVM, along with future opportunities for LLVM to improve JIT compilation and the LLVM infrastructure overall.
Read on for more information.
Friday, August 2, 2013
Object Caching with the Kaleidoscope Example Program
In previous posts I described the process of porting the LLVM Kaleidoscope tutorial program to use MCJIT as its execution engine and introduced a lazy compilation implementation with the MCJIT engine. The lazy implementation produced similar, and in some cases better, performance when compared with an implementation based on the older JIT execution engine, but it used more memory.
In this post, I’m going to extend the new implementation to use MCJIT’s object caching interface. This will give our interpreter a way to store pre-compiled versions of previously used functions and retrieve them for execution in later runs of the program.
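To give a feel for the interface involved before the full walk-through, here is a minimal sketch of an object cache: it subclasses llvm::ObjectCache and persists each compiled object image to a file keyed on the module name. This is illustrative only, not the implementation developed in this post, and the method signatures shown match relatively recent LLVM releases (earlier versions passed raw MemoryBuffer pointers); the file-naming scheme is hypothetical.

```cpp
// A sketch of an MCJIT object cache (illustrative, not this post's actual
// implementation). Signatures match relatively recent LLVM releases.
#include "llvm/ExecutionEngine/ObjectCache.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/FileSystem.h"
#include "llvm/Support/MemoryBuffer.h"
#include "llvm/Support/raw_ostream.h"
#include <memory>
#include <string>
#include <system_error>

class SimpleObjectCache : public llvm::ObjectCache {
public:
  // MCJIT calls this after compiling a module; we persist the object image.
  void notifyObjectCompiled(const llvm::Module *M,
                            llvm::MemoryBufferRef Obj) override {
    std::error_code EC;
    llvm::raw_fd_ostream OS(cacheFileFor(M), EC, llvm::sys::fs::OF_None);
    if (!EC)
      OS << Obj.getBuffer();
  }

  // MCJIT calls this before compiling a module; returning a buffer skips
  // code generation and reuses the cached object from an earlier run.
  std::unique_ptr<llvm::MemoryBuffer>
  getObject(const llvm::Module *M) override {
    auto Buf = llvm::MemoryBuffer::getFile(cacheFileFor(M));
    if (!Buf)
      return nullptr; // Cache miss: MCJIT compiles the module normally.
    return std::move(*Buf);
  }

private:
  // Hypothetical naming scheme: key the cache file on the module identifier.
  // A real cache would also hash the IR so stale objects are never reused.
  static std::string cacheFileFor(const llvm::Module *M) {
    return "kaleidoscope-cache-" + M->getModuleIdentifier() + ".o";
  }
};
```

Such a cache would be attached to the execution engine with something like EE->setObjectCache(&Cache) before the first compilation is triggered, since that is the hook MCJIT consults on both the store and load paths.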
Monday, July 29, 2013
Kaleidoscope Performance with MCJIT
In a previous post I described the process of porting the LLVM Kaleidoscope tutorial program to use MCJIT as its execution engine. After navigating through a series of roadblocks, we ended up with an implementation that worked as expected.
So it works, but the next question is, “Is it any good?”
A lot of people considering the transition from the older JIT execution engine to MCJIT have concerns about the possible performance implications, particularly related to the fact that MCJIT doesn’t support lazy compilation. The older JIT engine will generate code for functions in an LLVM module one function at a time, delaying compilation of each function until it is about to be executed. The MCJIT engine operates on entire modules, generating code for all functions in a module at once. In the previous post, we modified the Kaleidoscope interpreter to create multiple modules as needed, but we’re still compiling the entire current module when a function is executed.
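To make that last point concrete, here is a rough sketch of the module-per-function arrangement, with illustrative names rather than the tutorial’s actual helper class, and using the EngineBuilder/MCJIT calls as they exist in reasonably recent LLVM releases: each function definition lands in its own small module, so “compiling the entire current module” only costs one function’s worth of code generation.

```cpp
// A rough sketch of the module-per-function idea (illustrative names, not
// the tutorial's actual helper class; API as in recent LLVM releases).
// llvm::InitializeNativeTarget() and friends are assumed to have been
// called once at startup.
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/MCJIT.h"        // Pulls the MCJIT engine into the link.
#include "llvm/ExecutionEngine/SectionMemoryManager.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include <memory>
#include <string>

struct ModulePerFunctionJIT {
  llvm::LLVMContext Context;
  llvm::ExecutionEngine *Engine = nullptr;   // Owns every module handed to it.

  // Each top-level function definition gets a fresh module of its own.
  std::unique_ptr<llvm::Module> startModuleFor(const std::string &FnName) {
    return std::make_unique<llvm::Module>("mod_" + FnName, Context);
  }

  // Hand a finished single-function module to the engine. Nothing is
  // code-generated yet; MCJIT compiles a module when one of its symbols
  // is actually requested.
  void addModule(std::unique_ptr<llvm::Module> M) {
    if (!Engine)
      Engine = llvm::EngineBuilder(std::move(M))
                   .setMCJITMemoryManager(
                       std::make_unique<llvm::SectionMemoryManager>())
                   .create();
    else
      Engine->addModule(std::move(M));
  }

  // Looking up a symbol triggers code generation for the (small) module
  // that defines it, which is what keeps per-call compile times down.
  uint64_t addressOf(const std::string &Name) {
    return Engine->getFunctionAddress(Name);
  }
};
```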
So what does that look like in terms of performance?
Monday, July 22, 2013
Using MCJIT with the Kaleidoscope Tutorial
You may have noticed that there are two different JIT execution engines in the LLVM project. The older implementation (llvm::JIT) is a sort of ad hoc implementation that brings together various pieces of LLVM’s code generation and adds its own glue to get dynamically generated code into memory one function at a time. The newer implementation (llvm::MCJIT) is heavily based on the core MC library and emits complete object files into memory, then prepares them for execution.
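As a concrete illustration of that whole-module behaviour, the self-contained sketch below assumes a reasonably recent MCJIT API (the exact calls have shifted between releases): two trivial functions are built into one module, and both are turned into machine code together when the module is finalized, even though only one of them is ever called.

```cpp
// Minimal whole-module MCJIT example (illustrative; API details vary by
// LLVM version). Both functions are compiled together when the module is
// finalized, even though only "one" is ever called.
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/MCJIT.h"
#include "llvm/ExecutionEngine/SectionMemoryManager.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/TargetSelect.h"
#include <memory>
#include <utility>

int main() {
  llvm::InitializeNativeTarget();
  llvm::InitializeNativeTargetAsmPrinter();

  llvm::LLVMContext Ctx;
  auto M = std::make_unique<llvm::Module>("demo", Ctx);
  llvm::IRBuilder<> B(Ctx);

  // Two trivial functions: double one() { return 1.0; } and double two() { return 2.0; }
  const std::pair<const char *, double> Defs[] = {{"one", 1.0}, {"two", 2.0}};
  for (const auto &[Name, Val] : Defs) {
    auto *FT = llvm::FunctionType::get(B.getDoubleTy(), /*isVarArg=*/false);
    auto *F = llvm::Function::Create(FT, llvm::Function::ExternalLinkage, Name,
                                     M.get());
    B.SetInsertPoint(llvm::BasicBlock::Create(Ctx, "entry", F));
    B.CreateRet(llvm::ConstantFP::get(B.getDoubleTy(), Val));
  }

  llvm::ExecutionEngine *EE =
      llvm::EngineBuilder(std::move(M))
          .setMCJITMemoryManager(std::make_unique<llvm::SectionMemoryManager>())
          .create();

  // The entire module is emitted as one in-memory object image here; "two"
  // is compiled even though it is never used.
  EE->finalizeObject();

  auto One = reinterpret_cast<double (*)()>(EE->getFunctionAddress("one"));
  return One() == 1.0 ? 0 : 1;
}
```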
MCJIT has several advantages, including broader platform support and better tool integration. However, because it is designed to compile entire modules into object images, the MCJIT engine doesn’t directly support some key features of the older JIT implementation, such as lazy compilation. By lazy compilation, I mean deferring compilation of individual functions until just before the function is going to be executed.
At this point you may find yourself saying, “Wait a minute? Are you saying MCJIT doesn’t do ‘just-in-time’ compilation?!?” Well…sort of. It’s more of a dynamic code emitter than a true just-in-time compiler. That said, we’d like it to become a long-term replacement for the old JIT so that we can reap the benefits of ongoing development in core MC code generation.
So the question becomes, can we make MCJIT do what the older JIT engine does? The current answer is, “I hope so.” As a means of exploring this question, I decided to try to convert the Kaleidoscope tutorial to use MCJIT.