Julia logo

NumFOCUS congratulates the Julia community on the much-anticipated 1.0 release—the culmination of nearly a decade of work to build a language for “greedy programmers.” JuliaCon 2018 celebrated the event with a reception where the community officially set the version to 1.0.0 together.

Julia was first publicly announced with a number of strong demands on the language:

“We want a language that’s open source, with a liberal license. We want the speed of C with the dynamism of Ruby. We want a language that’s homoiconic, with true macros like Lisp, but with obvious, familiar mathematical notation like Matlab. We want something as usable for general programming as Python, as easy for statistics as R, as natural for string processing as Perl, as powerful for linear algebra as Matlab, as good at gluing programs together as the shell. Something that is dirt simple to learn, yet keeps the most serious hackers happy. We want it interactive and we want it compiled.”

Since then, Julia has been downloaded over two million times and has cultivated a large, vibrant community around that goal. Over 700 people have contributed to Julia itself, and many more have created thousands of amazing open source Julia packages. (Check out the new NumFOCUS sponsored project, JuMP, for one example!) In 2017, the Julia application Celeste achieved the rare milestone of 1.54 petaflops of peak performance, joining the “Petaflop Club.”

“It is this community that made Julia what it is today.”

Julia creators Jeff Bezanson, Stefan Karpinski, Viral Shah, and Alan Edelman shared their reflections: “When we first started writing Julia in August 2009, we never imagined it would come this far. Our 2012 blog post ‘Why we created Julia’ resonated with the community, and it is this community that made Julia what it is today. Merging the 1.0 Pull Request at JuliaCon, with over 500 people in person and on the live stream, was an emotional experience and something that we will all cherish forever. It has been an amazing 9 years and we have enjoyed every little bit of it – every line of code, every argument, every JuliaCon, the relationships we formed and the knowledge we gained.”

All told, the community has built a language that is:

  • Fast: Julia was designed from the beginning for high performance. Julia programs compile to efficient native code for multiple platforms via LLVM.
  • General: It uses multiple dispatch as a paradigm, making it easy to express many object-oriented and functional programming patterns. The standard library provides asynchronous I/O, process control, logging, profiling, a package manager, and more.
  • Dynamic: Julia is dynamically typed, feels like a scripting language, and has good support for interactive use.
  • Technical: It excels at numerical computing with a syntax that is great for math, many supported numeric data types, and parallelism out of the box. Julia’s multiple dispatch is a natural fit for defining number and array-like data types.
  • Optionally typed: Julia has a rich language of descriptive data types, and type declarations can be used to clarify and solidify programs.
  • Composable: Julia’s packages naturally work well together. Matrices of unit quantities, or data table columns of currencies and colors, just work — and with good performance.
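
The multiple dispatch mentioned above can be sketched in a few lines. The `Shape` types and `area` function here are illustrative examples, not from any particular package:

```julia
# Multiple dispatch: one generic function, many methods, and the method
# chosen depends on the runtime types of all the arguments.
abstract type Shape end

struct Circle <: Shape
    r::Float64
end

struct Square <: Shape
    s::Float64
end

area(c::Circle) = pi * c.r^2      # method for circles
area(sq::Square) = sq.s^2         # method for squares

# A function of two shapes dispatches on both arguments at once.
combined_area(a::Shape, b::Shape) = area(a) + area(b)

combined_area(Circle(1.0), Square(2.0))  # π + 4
```

Because any package can add new methods to existing generic functions like `area`, independently developed types compose without needing a shared inheritance hierarchy — which is what makes the “just work” composability above possible.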

“Julia is a testament to a diverse distributed team building open scientific software.”

“I first used the Julia language back when it was version 0.3,” said NumFOCUS President, Andy Terrel, Ph.D. “The promise of speed and flexibility was enticing but those early days were a challenge for stability. Through a truly herculean effort by the community, Julia has reached a stable point in only 5 years. Often it can take a decade or more to achieve such a milestone. Julia is a testament to a diverse distributed team building open scientific software. We at NumFOCUS are proud to have supported the team on its journey.”

Julia became a NumFOCUS sponsored project in 2014.

Julia 1.0 Highlights

The single most significant feature of Julia 1.0 is a commitment to language API stability: code you write for Julia 1.0 will continue to work in Julia 1.1, 1.2, etc. The language is “fully baked.”

1.0 also brings with it a significant number of new and innovative features:

  • Julia now has a canonical representation for missing values—a fundamental requirement for statistics and data science. In typical Julian fashion, the standard solution is both fully general and high-performance: collections of any built-in or user-defined type can efficiently support missing values; and the performance matches that of hand-crafted C++ implementations restricted to built-in primitive types in other systems. This makes Julia a state-of-the-art, first-class environment for data analysis.
  • To support the new missingness design, a whole suite of new compiler optimizations was implemented to efficiently handle the possibility that a given value might be one of a small handful of types. These “union-splitting” optimizations allow Julia to generate fast, machine-native code with the bare minimum of overhead while still supporting fully generic code that isn’t overly concerned about types.
  • Named tuples: a new data type and special syntax further simplify many data manipulation tasks. For example, named tuples can concisely and efficiently describe a row of a database with heterogeneous data: `(name="Julia", version=v"1.0.0", releases=8)`.
  • A brand new package manager makes it easier than ever to describe installed packages and dependencies in a reproducible manner, with first-class support for proprietary and private packages. Package registration is now federated, allowing companies and organizations to provide their own canonical source for the supported packages (and versions thereof). Individual projects and packages can now depend on very specific versions of their dependencies to improve reproducibility, and multiple sets of installed packages can co-exist on the same computer through distinct environments. More than that, it’s easier and faster than ever to install and update packages with the new REPL mode.
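
The missingness support described above can be seen in a few lines of plain Julia 1.0, with no packages required:

```julia
# `missing` is a first-class value that propagates through computation
# using three-valued logic, as in SQL and R.
v = [1, 2, missing, 4]

one_more = 1 + missing        # missing: arithmetic propagates missingness
unknown  = missing == 1       # missing, not false: the comparison is unknown

# Reductions can explicitly skip missing entries.
total = sum(skipmissing(v))   # 7

# The element type records the possibility of missingness; this small
# union of types is what the union-splitting optimizations exploit.
eltype(v)                     # Union{Missing, Int64} on 64-bit systems
```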
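
Named tuples likewise need no package at all; a short sketch using the example row from above:

```julia
# A named tuple: the field names are part of the type, so field access
# compiles to plain struct-style loads with no dictionary lookup.
row = (name = "Julia", version = v"1.0.0", releases = 8)

row.name                  # "Julia"
row.version >= v"1.0"     # true

# Named tuples iterate like ordinary tuples, so they destructure too.
name, version, releases = row

# merge builds an updated copy without mutating the original.
row2 = merge(row, (releases = 9,))
```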

The language itself is lighter than ever with many of its components split out into packages that ship by default as part of the standard library.

New Packages Based on 1.0 Capabilities

A number of new external packages are being built specifically around the new capabilities of Julia 1.0. For example:

  • The data processing and manipulation ecosystem is being revamped to take advantage of the new missingness support.
  • Cassette.jl provides a powerful mechanism to inject code-transformation passes into the just-in-time compiler, enabling post-hoc analysis and extension of existing Julia code. Beyond instrumentation for programmers like profiling and debugging, this can even implement automatic differentiation for machine learning tasks.
  • Heterogeneous architecture support has been greatly improved, and further decoupled from the internals of the Julia compiler. Intel KNLs just work in Julia. Nvidia GPUs are programmed using the CUDANative.jl package, and a port to Google TPUs is in the works.

Visit the Julia Blog for additional details on exciting new and improved features for Julia 1.0, including an extensible interface for broadcasting, a smarter optimizer, and more. For a complete list of changes, see the 0.7 NEWS file.

Download Julia 1.0

Try Julia by downloading version 1.0 now. If you’re upgrading code from Julia 0.6 or earlier, we encourage you to first use the transitional 0.7 release, which includes deprecation warnings to help guide you through the upgrade process. Once your code is warning-free, you can switch to 1.0 without any functional changes. Registered packages are now using this stepping stone to release 1.0-compatible updates.

 

Julia in Research and Industry

In the years leading up to the 1.0 release, Julia has achieved widespread adoption across industry and research sectors. Julia has been used by researchers with the FAA as a specification language for the next-generation Airborne Collision Avoidance System and to examine the effect of the No Child Left Behind act in Wisconsin.

Dozens of universities offer courses using Julia for teaching, including MIT, Sciences Po Paris, and Stanford. Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, the National Energy Research Scientific Computing Center, the National Renewable Energy Laboratory, and Oak Ridge National Laboratory are among the DOE laboratories using Julia.

Numerous companies make use of Julia in industry, including AOT Energy, Augmedics, Aviva, Berkery Noyes, BestX, Gambit Research, Lincoln Labs, Path BioAnalytics, Tangent Works, Voxel8, Amazon, Apple, BlackRock, BNDES, Capital One, Comcast, Disney, Facebook, Federal Reserve Bank of New York, Ford, IBM, KPMG, Microsoft, NASA, Oracle, and PWC.

 

NumFOCUS looks forward to celebrating many more achievements by the Julia community and congratulates everyone whose contributions helped make possible the 1.0 achievement!