ScalaDays 2011 Resources

Below, you'll find links to any publicly-available material relating to presentations given at ScalaDays 2011.

This includes, but is not limited to:

  • slides
  • videos
  • projects referenced
  • source code
  • blog articles
  • follow-ups / corrections

Overview

Articles

Recap by Martin Odersky

Day 0 (Wednesday, June 1st)

16:15 - 17:30

Martin presented a talk on 'Future-proofing Scala collections' at the Stanford EE Computer Systems Colloquium.

Day 1 (Thursday, June 2nd)

09:00 - 10:00

Keynote: Martin Odersky - State of Scala

Scala is a unique combination of cutting-edge programming language research and down-to-earth practicality. The setup of the ScalaDays conference is a great testament to that.

In my talk, I will give an outline of recent developments of the language and its ecosystem, and also will introduce some of the projects that are currently underway.

10:25 - 12:05

WS Session 1

MUTS: Native Scala Constructs for Software Transactional Memory

Daniel Goodman, Behram Khan, Salman Khan, Chris Kirkham, Mikel Lujan and Ian Watson

In this paper we argue that the current approaches to implementing transactional memory in Scala, while very clean, adversely affect the programmability, readability and maintainability of transactional code. These problems occur out of a desire to avoid making modifications to the Scala compiler. As an alternative we introduce Manchester University Transactions for Scala (MUTS), which instead adds keywords to the Scala compiler to allow for the implementation of transactions through traditional block syntax such as that used in "while" statements. This allows for transactions that do not require a change of syntax style and do not restrict their granularity to whole classes or methods. While implementing MUTS does require some changes to the compiler's parser, no further changes are required to the compiler. This is achieved by the parser describing the transactions in terms of existing constructs of the abstract syntax tree, and the use of Java Agents to rewrite the resulting class files once the compiler has completed.
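
MUTS itself extends the parser with a real keyword, but the block shape it enables can be approximated in plain Scala with a by-name parameter. A minimal, library-level sketch (not MUTS's actual implementation, which performs real STM bookkeeping):

  object Atomic {
    // The body is a by-name parameter, so call sites read like a built-in
    // block statement. A real STM would log reads/writes and retry on
    // conflict; this stand-in simply runs the body.
    def atomic[T](body: => T): T = body
  }

  object Demo extends App {
    import Atomic._
    var counter = 0
    atomic { counter += 1 }   // "while"-style block syntax
    println(counter)
  }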

Paper | Slides | Video

Setac: A Framework for Phased Deterministic Testing of Scala Actor Programs

Samira Tasharofi, Milos Gligoric, Darko Marinov, and Ralph Johnson

Scala provides an actor library where computation entities, called actors, communicate by exchanging messages. The schedule of message exchanges is in general non-deterministic. Testing non-deterministic programs is hard, because it is necessary to ensure that the system under test has executed all important schedules. Setac is our proposed framework for testing Scala actors that (1) allows programmers to specify constraints on schedules and (2) makes it easy to check test assertions that require actors to be in a stable state. Setac requires little change to the program under test and requires no change to the actor run-time system. In sum, Setac aims to make it much simpler to test non-deterministic actor programs in Scala.

Parallelizing Machine Learning (Functionally): A Framework and Abstractions for Parallel Graph Processing

Philipp Haller and Heather Miller

Implementing machine learning algorithms for large data, such as the Web graph and social networks, is challenging. Even though much research has focused on making sequential algorithms more scalable, their running times continue to be prohibitively long. Meanwhile, parallelization remains a formidable challenge for this class of problems, despite frameworks like MapReduce which hide much of the associated complexity. We present a framework for implementing parallel and distributed machine learning algorithms on large graphs, flexibly, through the use of functional programming abstractions. Our aim is a system that allows researchers and practitioners to quickly and easily implement (and experiment with) their algorithms in a parallel or distributed setting. We introduce functional combinators for the flexible composition of parallel, aggregation, and sequential steps. To the best of our knowledge, our system is the first to avoid inversion of control in a (bulk) synchronous parallel model.

TT Session 1: DSLs for Parallelism

All talks in this session provided by Stanford Pervasive Parallelism Laboratory

Liszt: A DSL for solving mesh-based PDEs

Zach DeVito

Liszt is a domain-specific language that exposes a high-level interface for mesh-based computation. This frees scientists from architecture-specific implementations and increases programmer productivity. Currently, state-of-the-art PDE solvers are tied to a specific platform and architecture. Extending these codes with new algorithms or porting them to new hardware is tedious, and the scientist is distracted by low-level decisions regarding the targeted architecture. Liszt code is portable across architectures, and provides high-level abstractions without sacrificing performance by performing domain-specific optimizations at compile time. We present support for explicit and implicit methods via built-in mesh and sparse matrix structures, and portability results across SMPs, MPI-based clusters and GPUs.

Slides | Video | Home Page

Delite: A Framework for Heterogeneous Parallel DSLs

Kevin Brown

Computing systems are becoming increasingly parallel and heterogeneous, and therefore new applications must be capable of exploiting parallelism in order to continue achieving high performance. Unfortunately targeting these emerging devices often requires using multiple disparate programming models and making decisions that can limit forward scalability. Domain-specific languages (DSLs), however, can provide high-level abstractions that enable transformations to high performance parallel code without degrading programmer productivity. We present the Delite Compiler and Runtime environment, an end-to-end system for executing DSL applications on parallel heterogeneous hardware. The framework lifts embedded DSL applications to an intermediate representation (IR), performs general-purpose, parallel, and domain-specific optimizations, and generates an execution graph that targets multiple heterogeneous hardware devices.

OptiML: A DSL for machine learning

Arvind Sujeeth

As the size of datasets continues to grow, machine learning applications are becoming increasingly limited by the amount of available computational power. Taking advantage of modern hardware requires using multiple parallel programming models targeted at different devices (e.g. CPUs and GPUs). However, programming these devices to run efficiently and correctly is difficult, error-prone, and results in software that is harder to read and maintain. We present OptiML, a domain-specific language (DSL) for machine learning. OptiML is an implicitly parallel, expressive and high performance alternative to MATLAB and C++. OptiML performs domain-specific analyses and optimizations and automatically generates CUDA code for GPUs. We show that OptiML outperforms explicitly parallelized MATLAB code in nearly all cases.

Slides | Video | Home Page

TT Session 4: Scala enterprise experiences

Scala on Android: Real-world Experience at Bump Technologies

Michael Galpin

You might not have known that Bump, one of the most popular Android applications on the Market, was built using Scala. There were many factors in deciding to use Scala for Bump, and there have certainly been some tradeoffs. This talk will focus on these factors and tradeoffs, as well as a few lessons and tricks learned along the way.

Pedestrian Scala: Applying Scala to performance challenges in the Cable TV Industry

Jon Steelman

Real-world performance challenges in the cable TV industry, with interactive advertising moving significant data volumes.

Given highly inadequate processing performance with a legacy JVM technology stack, would the scalability/performance promise and faster-development promise of Scala hold true even for a team new to Scala and under deadlines? Our team's attempt at incremental performance improvements with the legacy JVM stack was making minimal progress.

We will share our experience assessing and trialing Scala in a practical business application for the cable TV industry, where an XML-processing spike solution ultimately led to a full production solution. Technical benefits as well as business benefits of Scala adoption will be discussed. Our experience can provide some insight into the applicability of Scala to a broader range of pedestrian IT applications that are typically developed in Java or Groovy.

Slides | Video

Scala at Bizo

Alex Boisvert

Bizo is a high-tech startup company that progressively adopted Scala starting in late 2009. This presentation will report on our experience using and deploying Scala within our internet marketing platform.

I'll review the small and not-so-small steps we took during our adoption, how we integrated Scala with our build and cloud deployment system, discuss interoperability and layering with existing internal and 3rd-party Java libraries and illustrate the convenience, expressiveness and performance afforded by Scala in various parts of our web analytics platform.
In particular, we'll demonstrate a scalable – so-called NoSQL or Big Data – multidimensional database backend built with Scala and designed to run on the Amazon Web Services cloud infrastructure (EC2, S3, ...).

The presentation is intended for individuals and companies considering or already in the process of adopting Scala. As part of the presentation, I'll share a few challenges our team faced during our adoption and how we dealt with them.

TT Session 17

Inhibitions: Reclaiming exceptions for better static safety in Scala

Jon Pretty

Java provides an exception-handling mechanism to allow certain events to interrupt the normal program flow. Whilst Scala retains this feature from Java, the constraint of requiring the programmer to either handle or declare all checked exceptions is not carried across. This has changed how exceptions are used in Scala, to the extent that some coding styles discourage the use of exceptions entirely. This talk describes an alternative approach to exception checking, called inhibitions, which inverts the logic of Java's exception declarations by permitting the programmer to declare methods as never throwing the type of exception specified, and having the compiler check this.

StagedSAC - embedded DSL for multidimensional arrays

Vlad Ureche

StagedSAC is an embedded DSL for operations on multidimensional arrays. The language is modeled after Single Assignment C and is embedded in Scala using the Lightweight Modular Staging framework. The main challenge encountered was the need for a further type inference stage inside the embedded language, which cannot be offloaded to the Scala type inferencer and is now done as an optimization pass. This talk will describe the concept of multidimensional arrays, the Single Assignment C language, how the need for a further type inference stage arose, and how this was implemented in the Lightweight Modular Staging framework.

Slides | Video

Scala Domain Modeling and Architecture

Hossam Karim

We present how we introduced Scala to our clients as the main programming language to implement an OSGi based micro-kernel service container. We discuss the technology stack and architectural approaches including:

13:45 - 15:25

WS Session 2

Scala.react: Embedded Reactive Programming in Scala

Ingo Maier

In contrast to batch processing systems, interactive systems require substantial programming effort to continuously synchronize with their environment. We quickly review existing programming systems that address this task. We present scala.react, our approach to embedded reactive programming in Scala which combines many advantages of previous systems. We show how our implementation makes use of dynamic dataflow graphs, delimited continuations and implicit parameters in order to achieve certain semantic guarantees and concise syntax.
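
For flavor, a sketch of the signal style presented in the paper, assuming the Var/Signal/observe API it describes is on the classpath (exact names and imports may differ from the released library):

  import scala.react._   // assumes the scala.react library

  object ReactiveDemo extends App {
    val a = Var(1)
    val b = Var(2)
    // The a()/b() calls record dynamic dataflow dependencies, so sum is
    // re-evaluated whenever a or b changes.
    val sum = Signal { a() + b() }

    observe(sum) { v => println("sum is now " + v) }
    a() = 7   // triggers re-evaluation and notifies the observer
  }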

Paper | Slides | Video

Checking Flight Rules with TraceContract - Application of a Scala DSL for Trace Analysis

Howard Barringer, Klaus Havelund, Elif Kurklu, and Robert Morris

Typically during the design and development of a NASA space mission, rules and constraints are identified to help reduce reasons for failure during operations. These flight rules are usually captured in a set of indexed tables, containing rule descriptions, rationales for the rules, and other information. Flight rules can be part of manual operations procedures carried out by humans. However, they can also be automated, and either implemented as on-board monitors, or as ground based monitors, part of a ground data system. In the case of automated flight rules, one considerable expense to be addressed for any mission is the extensive process by which system engineers express flight rules in prose, software developers translate these requirements into code, and then both experts verify that the resulting application is correct. This paper explores the potential benefits of using an internal Scala DSL (Domain Specific Language) to write executable specifications of flight rules.

Lafros MaCS Programmable Devices: case study in type-safe API design using Scala

Rob Dickens

Lafros MaCS is an experimental Scala API for building distributed monitoring and control systems, and features reusable software modules known as Programmable Devices (PDs). Each such module provides a type-safe API for doing such things as registering a device-interface instance or writing a program to control the device. After introducing PDs, the talk explains how this type-safety is achieved, starting from the simplest of examples and describing how various Scala language features can play a part, and concludes with an account of how the present framework was arrived at.

TT Session 2: IDEs

ENSIME: The ENhanced Scala Interaction Mode for Emacs

Aemon Cannon

ENSIME is a new Scala environment for Emacs. It provides many common IDE features, such as live error-checking, symbol inspection, package/type browsing, and automated refactoring. This talk will give an overview of ENSIME's features and discuss aspects of its design, especially the client/server architecture and points of integration with the Scala Compiler.

Slides | Video

The Scala IDE for Eclipse reloaded

Iulian Dragos

In this talk I will present the new architecture of the presentation compiler and how the Eclipse IDE is using it to deliver a reliable, responsive Scala environment.

Moving from a text editor to a modern IDE requires tools that understand the source code. Most semantic actions, like content assist (code completion) or 'jump to definition' require full type-checking. The Scala type system is one of the most advanced type systems in use today, and the reference type checker implementation is roughly 15kLOC of Scala code. Rewriting that for an IDE would be a very tedious and error-prone task. Instead, we decided to use the existing Scala compiler, with the added benefit that it will always be up to date with the spec.
Type-checking is the single most time-consuming task in the compiler, therefore type-checking files at every keystroke was immediately ruled out. The result is an asynchronous, interruptible, targeted type-checker that can be asked to perform actions like retrieve type members at a given position, or find the definition of a given symbol.

Type Debugger

Hubert Plociniczak

Statically typed languages tend to require a large number of type annotations in order to create valid code. The type inference found in advanced programming languages like Scala or Haskell allows for a reduction in the number of user-provided types, often making the code easier to read and understand. Unfortunately it is not always possible to get rid of them for more complicated constructs, and it may be hard to explain to the programmer the reasons for these limitations. Programmers therefore rely blindly on the processes of typechecking and type inference, treating them as an oracle that returns either a positive response or an error message. Unfortunately, the more advanced features a language uses, the less informative the latter becomes. Error messages also tend to become more cryptic, especially for beginner programmers, bringing disappointment and eventually driving programmers towards dynamically typed languages.

We believe that making a language more powerful and adding increasingly advanced features has to come in parallel with ways of easing their understanding for programmers. Scala is often criticized for being too complicated, but in reality the complexity stems from the fact that programmers have to have a very good understanding of programming language concepts.

The aim of our project is to explain the workings of this “black box” in an accessible way. By accessible we do not mean referring to academic papers or textbooks, but rather presenting it in a way that is more approachable to humans, through visualization of typechecking. A type debugger, in essence, aims to be an educational tool for programmers who want to understand the static type system implementation in a powerful language like Scala. We believe that until now the subject of explaining the process of typechecking has been largely neglected and only a few interesting projects exist, including Chameleon for a subset of Haskell (no longer maintained) and an extension of PLT Redex using term rewriting. None of these projects attempted to cover a full language, especially a hybrid functional and object-oriented language like Scala.

Our tool instruments the standard Scala compiler to produce large amounts of traces that we currently call events. These are later translated into logical blocks that correspond to specific actions of the typechecker. Although in theory one could search for a specific type of event and just print it to the user, such filtering is of only limited use for understanding the workings of the compiler. Hence, through our building blocks we abstract the work of the typechecker into a high-level representation which can later be manipulated, annotated with even more information, and shown to the user. For instance, one of the most desired extensions for the type debugger would involve the possibility of showing implicit arguments and conversions at any given typechecking step. An interesting question is whether the data visualization technique that we chose (tree-based) is actually the most suitable and intuitive for this purpose. This is not a trivial question once we start dealing with a large number of classes, traits and modules that involve complex relations between the types.

Apart from its educational purpose, our tool is also useful for Scala language developers. For our prototype, the compiler branch is annotated with a larger number of events than is actually necessary to build the high-level representation of the typechecker. This way developers can produce more information about the regions of the compiler they are most interested in. Our aim is to allow switching between those modes (high-level and detailed) easily. We think that for the former it is enough to visualize only the behavior described in the language specification.

The current implementation is a simple prototype written using the standard Swing libraries. We believe it will be feasible to integrate it in the future with IDEs for a more interactive experience. A next step in the development of the type debugger would involve partial typechecker information retrieval - at the moment we run it on full applications, which often might not be necessary.

Slides | Video

TT Session 5: ML / linear algebra

Spark: Fast, Interactive, Language-Integrated Cluster Computing

Matei Zaharia

Spark is an open source cluster computing framework that aims to both generalize the data flow programming model of MapReduce and make applications easier to write through a language-integrated Scala API. Spark provides fault-tolerant "distributed datasets" that can be manipulated through parallel operators like map, reduce, filter, and join, much like local collections, and can also be cached in memory on the cluster for future reuse. The ability to cache datasets makes Spark especially efficient for iterative applications, like machine learning and graph algorithms, where it can outperform Hadoop by 20x. Finally, we modified the Scala interpreter to make it possible to run Spark interactively to load big datasets into memory and query them, providing substantially lower latencies than Hadoop-based tools. We plan to demo this use case.

Spark is being used by machine learning researchers at Berkeley and engineers at Conviva to run in-memory analytics on hundreds of gigabytes of data.
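
For flavor, a minimal log-mining session in the style of the early (2011) Spark API, assuming a Spark build on the classpath; the package and constructor names are those of that era:

  import spark.SparkContext

  object LogMining {
    def main(args: Array[String]) {
      val sc = new SparkContext("local[4]", "LogMining")
      // A distributed dataset, cached in cluster memory so that repeated
      // queries avoid re-reading from disk.
      val errors = sc.textFile("hdfs://logs/part-*")
                     .filter(_.contains("ERROR"))
                     .cache()
      println("total errors: " + errors.count())
      println("timeouts:     " + errors.filter(_.contains("timeout")).count())
    }
  }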

Scalala: A Scalable Linear Algebra Library

Daniel Ramage

Numerical programming environments such as Matlab and R can easily express linear algebra as simple expressions involving matrices and vectors. However, general purpose programming with richer data structures in these environments ranges from slow to painful. Ideally, a programmer shouldn't have to choose between the convenience of a language designed for linear algebra and the performance of a general purpose programming language.
In this talk, I will introduce Scalala, a numerical linear algebra library for Scala. Scalala supports rich Matlab-like operators on vectors and matrices and a library of numerical and plotting routines. By borrowing from and extending the design of the Scala 2.8 collections, Scalala's built-in matrix and vector types allow linear algebra to be expressed succinctly and executed efficiently, with syntax like: a.t * a + b * 2. The library also enriches Scala's built-in collection types with similar operators, allowing for mathematical operations on (nested) numerically valued collections like: Map('a' -> 1, 'b' -> 2) * 2 == Map('a' -> 2, 'b' -> 4).

I will present an overview of the design and implementation of Scalala and demonstrate how Scala's implicit resolution rules can be used to drop in highly optimized code-paths in a statically type-safe way. This enables Scalala to provide high performance, statically type-safe linear algebra directly in Scala. When combined with the Scala interpreter, Scalala is an able Scala-hosted interactive data analysis platform.

Slides | Video

Rogue: A type-safe query language for MongoDB

Jason Liszka, Jorge Ortiz

We present Rogue, a type-safe query language for MongoDB written in Scala. MongoDB is a NoSQL datastore for schema-less, JSON-like documents which supports a very rich query language. Even though MongoDB is schema-less, we use Lift's Record and Field libraries to define a typed schema for our Mongo collections, which enforces type safety for any code reading or writing from those collections. Rogue builds on top of Record and allows for type-safe creation of Mongo queries. By exploiting the type information present in the Record schema, Rogue protects at compile time against using operators on fields where they don't make sense (e.g., greater than on a List field, or size on an Int field). In addition, Rogue uses phantom types to ensure that certain modifiers for a query (e.g., skip, limit) aren't applied twice or in contradictory ways. We will discuss our experience using Rogue with a team of several dozen engineers at foursquare. Rogue has made our database query code more concise, easier to understand, and has eliminated entire classes of bugs from it. Finally, we discuss possible future directions for Rogue: compile time guarantees that queries make indexed lookups, and the ability to generate more complex MapReduce queries.
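
A sketch in the style of Rogue's published examples, assuming a Venue record defined with Lift's Record (the record and its fields are illustrative):

  // Assuming: class Venue extends MongoRecord[Venue] with fields
  // mayor: LongField and tags: MongoListField[String].
  val venues = Venue where (_.mayor eqs 1234) and (_.tags contains "coffee") limit 10 fetch()

  // The phantom-typed query builder rejects contradictory modifiers at
  // compile time, e.g. applying limit twice:
  // Venue where (_.mayor eqs 1234) limit 10 limit 20   // does not compile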

TT Session 7

Remote function applications: A framework towards Scala Grid Parallel Collections

Nermin Serifovic

If we want to get some piece of functionality executed on another server, we typically think of remote actors. However, even though the actor model might be a good fit for solving concurrency problems, it is not necessarily the best weapon for attacking parallelism. One example is computationally intensive pure functions, which are stateless by nature.

Once there is a framework in place for efficiently applying functions on remote computers, and also composing them, the next question becomes: what does it take to bring Scala Parallel Collections to the next level? That is, what would it look like if we wanted to distribute collection operations across a compute grid?

The goal of this talk is to present such a framework and the API for grid-parallel versions of several collection operations which involve function application (e.g. map, flatMap, filter).

Furthermore, it will show a use-case application built on this framework, solving a problem that is more compute-intensive than data-intensive.

Implementing Wadler's Arrow Calculus in Scala

Luc Duponcheel

This technical talk is about a case study to experiment with the expressive power of Scala.

Although I'm aware of the existence of libraries that deal with monads and related concepts, I wanted to approach their implementation from another point of view: less Haskell-like and more Scala-like.

Recently I discovered the paper "The Arrow Calculus", and I felt challenged to implement the ideas of the paper in Scala.
Among other things, the result turned out to be an interesting case study in the use of:

How Scala Experience Improved Our Java Development

Sam Reid

PhET Interactive Simulations at the University of Colorado creates free, open-source educational simulations. After developing several simulations in Scala, we identified several advantageous techniques and patterns in Scala which we were able to transfer to subsequent Java development projects. Specifically, our experience with Scala helped us attain the following advantages in our Java development: improved object initialization, improved code organization, reduced code duplication and improved management of boilerplate code. The effect of these changes has been to make our code easier to write, read and maintain. These ideas are not specific to our application domain, but should work equally well in a broad range of domains. We also discuss how adoption of these Scala-like patterns in Java code can simplify the learning curve for Java developers who want to learn Scala.

15:50 - 17:30

WS Session 3

Loop Recognition in C++/Java/Go/Scala

Robert Hundt and Tipp Moseley

In this experience report we encode a well-specified, compact benchmark in four programming languages, namely C++, Java, Go, and Scala. The implementations each use the languages’ idiomatic container classes, looping constructs, and memory/object allocation schemes. They do not attempt to exploit specific language and runtime features to achieve maximum performance. This approach allows an almost fair comparison of language features, code complexity, compilers and compile time, binary sizes, runtimes, and memory footprint. While the benchmark itself is simple and compact, it employs many language features, in particular higher-level data structures (lists, maps, lists and arrays of sets and lists), a few algorithms (union/find, dfs / deep recursion, and loop recognition based on Tarjan), iterations over collection types, some object-oriented features, and interesting memory allocation patterns. We do not explore any aspects of multi-threading, or higher-level type mechanisms, which vary greatly between the languages. The benchmark points to very large differences in all examined dimensions of the language implementations. After publication of the benchmark internally at Google, several engineers produced highly optimized versions of the benchmark. While this whole effort is an anecdotal comparison only, the benchmark and subsequent tuning effort might be indicative of typical performance pain points in the respective languages.

Compiling Scala to LLVM

Geoffrey Reedy

This paper describes ongoing work to implement a new backend for the Scala compiler that targets the Low Level Virtual Machine (LLVM). LLVM aims to provide a universal intermediate representation for compilers to target and a framework for program transformations and analyses. LLVM also provides facilities for ahead-of-time and just-in-time native code generation. Targeting LLVM allows us to take advantage of this framework to compile Scala source code to optimized native executables. We discuss the design and implementation of our backend. We also outline the additional work needed to produce a robust backend.

Scala+GWT: Running Scala Code in a Browser

Lex Spoon

The Scala+GWT project is working to compile Scala code for running in a standard web browser. It uses the Google Web Toolkit (GWT) for the heavy lifting, and a new format called Jribble as an intermediate format that both Scala and GWT understand. In addition to letting you share code between the server and the client, this approach gives access to the substantial capabilities of GWT, such as code splitting, image spriting, CSS optimization, and templated UIs. In this talk I will describe the advantages of the approach, the main technical challenges, and the current status.

TT Session 3: Parallelism

Scala Parallel Collections

Aleksandar Prokopec

Parallel programming abstractions become increasingly important as the number of processor cores grows. A high-level programming model enables the programmer to focus more on the program and less on low-level details such as synchronization and load-balancing. Scala parallel collections extend the programming model of the Scala collection framework, providing parallel operations on datasets.

The talk will describe the architecture of the parallel collection framework, explaining their implementation and design decisions. Concrete collection implementations such as parallel hash maps and parallel hash tries will be described. Finally, several example applications will be shown, demonstrating the programming model in practice.
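
Since Scala 2.9 the framework is reachable from any standard collection via the par method; a minimal example:

  object ParDemo extends App {
    val xs = (1 to 1000000).toArray
    // .par switches to the parallel counterpart of the collection; the
    // familiar map/filter/sum API then runs on multiple cores.
    val total = xs.par.map(_ * 2).filter(_ % 3 == 0).sum
    println(total)
  }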

Functional Approach to Distributed Programming with GridGain and Scala

Nikita Ivanov

This presentation is about using Scala with the GridGain framework to provide a simple and productive development platform for high-performance distributed applications. Two thirds of the presentation will be devoted to a live coding demonstration of writing a basic MapReduce application in a Scala DSL based on the GridGain distributed runtime. All coding during the demonstration will be done live. An overview of grid and cloud computing concepts will also be given.

Slides | Video

Kafka - A distributed publish/subscribe messaging system

Neha Narkhede

Kafka is a distributed publish/subscribe messaging system aimed at providing a scalable, high-throughput solution for log aggregation and processing of all activity stream data on a consumer-scale website. Built on Apache Zookeeper using Scala, Kafka aims at unifying offline and online data processing by providing a mechanism for parallel data load into Hadoop as well as the ability to partition real-time consumption over a cluster of machines. Written by the Search, Network and Analytics team at LinkedIn, Kafka is open sourced under the Apache License. In this presentation, we will discuss some of the production applications of Kafka at LinkedIn. We will highlight the core design principles of Kafka and how those make it a good fit for both real time applications as well as offline analytical processing. Finally, we will briefly take a look at the performance metrics and future directions.

TT Session 6: DSLs II

Second Time's the Charm: Examining the Anti-XML Framework for Scala

Daniel Spiewak, David LaPalomento

The scala.xml framework bundled with the standard library has a lot of long-standing issues. Anti-XML is a clean room effort to replace this framework with something safer, more convenient and more performant. This talk will cover the design decisions that went into Anti-XML (providing some theoretical justification for a few of the more contentious ones). We will examine the general architecture of Anti-XML from a framework standpoint, the end-user API and performance with a particular emphasis on the areas in which Anti-XML exceeds the capabilities of scala.xml. Once we have the high-level overview out of the way, we will engage in some live-coding to demonstrate the framework in action.

Scala Integrated Query - more expressive database queries

Christopher Vogt

Scala Integrated Query is a prototype developed at LAMP, EPFL. It compiles a subset of Scala into SQL and executes it in the DBMS. Queries can result in single values or arbitrarily nested lists and tuples, which can require more than one SQL query. Compared to using SQL directly, using Scala Integrated Query can lead to more accurate code and makes it easier to achieve good performance for complex queries. It features greater expressiveness, type safety, familiar syntax and easy composability. Complex queries are efficiently mapped to SQL and automatically optimized. Avalanches of SQL queries are prevented, in particular when correlating data in main memory with data in the database. This prototype was developed in a Master's project and builds on research from the University of Tübingen, namely Ferry and Pathfinder. Work will continue at Typesafe to turn the prototype into a production-ready library.

Hammurabi - A Scala rule engine

Mario Fusco

One of the most common reasons why software projects fail, or suffer unbearable delays, is misunderstanding between business analysts and developers. The latter write the business rules in a language that is completely obscure to the former, so the business analysts have no chance to read, understand and validate what the programmers developed. They can only empirically test the final software's behavior, hardly covering all the possible corner cases and recognizing mistakes only when it is too late.

Hammurabi is an actor-based rule engine written in Scala that leverages the language's features, making it particularly suitable for implementing extremely readable internal DSLs. What makes Hammurabi different from all other rule engines is that, although its rules are written directly in the host language, they are easily understandable even by non-technical people.

Slides | Video

TT Session 8

Scala from a Rubyist point of view

Rémy-Christophe Schermesser

Once upon a time, there was a language called Java. It was the state of the art to build Web applications - well there was nothing else. But it pained developers to use it, because it was too cumbersome, too heavy...

Then some developers decided to rebel and stab Java in the back by using other languages. One of these languages was Ruby. On top of it they built several web frameworks that were easy to use, and battling with web applications became fun again!
But Ruby was not alone. Other languages were there and wanted to have their own bunch of coders. In the dark Scala was waiting and growing... It had a lot of fine weapons to attract coders, and so coders came.

Now, on the raging battlefield of web development who will win: Ruby or Scala?

This talk will focus on comparing the two languages from a developer's point of view: the ease of testing and of producing code efficiently with each.

Unit testing implicit methods that use infrastructure code

Alberto Souza

Implicit methods are one of the most used features of Scala. But sometimes, when you want to test them, especially if there is logic that needs infrastructure, like accessing a database, the setup can be hard. This talk presents a situation where we need to test code that uses an implicit method; instead of using the implicit used in production code, we use a mocked version of it to create our unit tests.
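
The idea can be illustrated in plain Scala (names are illustrative, not the talk's code): when the infrastructure dependency travels as an implicit parameter, a test can simply bring a fake instance into scope.

  trait Database { def findName(id: Int): String }

  object UserService {
    // Production code wires the database once, implicitly.
    def greet(id: Int)(implicit db: Database) = "Hello, " + db.findName(id)
  }

  object UserServiceTest extends App {
    // In the test, a mocked implicit stands in for the real database,
    // so no infrastructure setup is needed.
    implicit val fakeDb = new Database { def findName(id: Int) = "test-user" }
    assert(UserService.greet(42) == "Hello, test-user")
    println("ok")
  }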

Slides | Video

ScalaU - Implementing a Scala library for Units of Measure

Adrian Fritsch

Units of Measure libraries or language extensions are available in some programming languages. Most of them, however, focus primarily on unit conversion, and few support compile-time dimensional analysis. We present the challenges in implementing Units of Measure support in general, and briefly present the features supported in other languages. We show how Scala, with its powerful type system, is a language well suited for implementing support for Units of Measure. A few possible Scala implementations are briefly described, with pros and cons. We then present ScalaU, a pragmatic, type-safe Scala library supporting seamless unit conversion, as well as customizable, compile-time dimensional analysis and unit inference. We show that, just as a strong type system can go a long way in proving program correctness, a strong units library can go a long way in proving correctness of scientific calculations.
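
ScalaU's actual design is not shown here, but the core idea of moving dimensions into the type system can be sketched with a phantom type parameter (a generic illustration, not ScalaU's API):

  // The unit tag exists only at compile time; mixing units is a type error.
  sealed trait Meters
  sealed trait Seconds

  case class Qty[U](value: Double) {
    def +(other: Qty[U]) = Qty[U](value + other.value)   // same unit only
  }

  object UnitsDemo extends App {
    val distance = Qty[Meters](100.0)
    val time     = Qty[Seconds](9.58)
    println((distance + Qty[Meters](5)).value)   // 105.0
    // distance + time   // does not compile: Qty[Meters] vs Qty[Seconds]
  }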

Slides | Video

Day 2 (Friday, June 3rd)

09:00 - 10:00

Keynote: Doug Lea - Supporting the Many Flavors of Parallel Programming

Functional programming is mostly about evaluating (possibly parallelizable) expressions. Object-oriented programming is mostly about passing messages among (possibly autonomous) objects. These, and other familiar programming models do not transparently map to common platforms. This talk describes some of the ideas involved in supporting them, along with intermediary forms that come into play in effective concurrent programming.

10:25 - 12:05

TT Session 9

A tour of the repl's :power mode

Paul Phillips

The speaker is primarily responsible for the current state of the Scala REPL, unfortunately including the largely non-existent documentation of some of its most appealing features. I propose to take a step toward remedying that with a tour of what is possible, and then to field questions about the REPL or any other aspect of the compiler and library.

Parallel Distributed Collections API

Josh Suereth and Daniel Mahler

This talk outlines the construction of a Parallel Distributed Collections API (Cascade) and the challenges associated with developing a Scala API over the existing Java solution (Flume). The details of the JavaFlume library are discussed, including the relevant parallel collections abstractions and the parallel operations allowed against these collections. The current implementation of JavaFlume works with distributed sharded files and Google's BigTable.

The optimisation engine for Flume is its defining feature. High-level operations like sort, map, reduce and join can be reduced into a series of map-reductions and executed against a cluster of machines. This allows users of the library to develop complex parallel operations in piecemeal fashion and construct a pipeline of data processing.

Cascade is a Scala API built on top of Flume that takes advantage of Scala's functional nature. Challenges in developing Cascade and practical solutions will be examined in depth. Cascade presents a new way of performing MapReduce operations that is innovative and elegant.

Effective Scala

Bill Venners and Dick Wall

Scala is a powerful, modern language with many features--so much good stuff, in fact, that it can sometimes be hard to figure out what to do with it all. In this talk, Bill Venners and Dick Wall will walk you through some guidelines for the effective use of Scala's features. Half the talk will focus on coding-level practices, and the other half on library and DSL design guidelines.

TT Session 11: Akka

Project Hydrogen: Building a distributed compute platform for design engineering with Akka and Scala

Garrick Evans

I will be presenting an overview of the first year of development of a new platform within my emerging technology group. This experience report will hopefully provide insights into using Scala and Akka within a commercial organization and production deployment. I intend to outline both the benefits and the challenges that I encountered during this phase of the project, and make an argument in support of participation in the open source community.

Slides | Video

Above the Clouds: Introducing Cloudy Akka

Jonas Bonér

We believe that one should never have to choose between productivity and scalability, which has been the case with traditional approaches to concurrency and distribution. The cause of that has been the wrong tools and the wrong layer of abstraction — and Akka is here to change that. Akka uses actors together with Software Transactional Memory (STM) to create a unified runtime and programming model for scaling both UP (utilizing multi-core processors) and OUT (utilizing the grid/cloud). Cloudy Akka, an extension to Akka, provides location and network transparency by abstracting away both these dimensions of scalability and turning them into an operations and configuration task. This gives the Cloudy Akka runtime the freedom to do adaptive automatic load-balancing, cluster rebalancing, replication, fail-over and partitioning. In this talk you will learn what Cloudy Akka is, how it is implemented, and how it can be used to solve hard scalability problems.

The Promising Future of Akka

Viktor Klang

In this talk we will explore Akka's Future construct, and how it relates to Promises and dataflow variables. We will talk about how one can use Akka's Futures to build completely non-blocking parallel computations, data transformations and map-reduce solutions using very simple building blocks, for powerful, elegant concurrency that scales from small to large.
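
A sketch of the composition style, assuming Akka 1.x's Future API (Futures.future to spawn work, map/flatMap for non-blocking composition); names and signatures may differ between Akka versions:

  import akka.dispatch.Futures

  object FutureDemo extends App {
    // Two independent computations; neither blocks the caller.
    val a = Futures.future { 21 * 2 }
    val b = Futures.future { 1000 / 4 }

    // Composition is non-blocking: sum completes when both inputs do.
    val sum = for (x <- a; y <- b) yield x + y

    sum onComplete { f => println("sum = " + f.result) }
  }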

TT Session 13: Web

Finagle: A Network Stack for the JVM

Marius A. Eriksen

We share our experience building and deploying Finagle, a library for building robust and highly performant asynchronous RPC servers and clients. Finagle is built on top of Netty and uses futures as a unifying abstraction in order to provide an intuitive and powerful API on top of asynchronous dispatching.

Finagle supports a variety of RPC styles, including request-response, streaming, and pipelining. It is protocol agnostic, and we have implemented codecs for the core protocols at Twitter.
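
A client-side sketch in the style of Finagle's early documentation, assuming Netty's HTTP types on the classpath (the builder parameters are illustrative):

  import com.twitter.finagle.Service
  import com.twitter.finagle.builder.ClientBuilder
  import com.twitter.finagle.http.Http
  import org.jboss.netty.handler.codec.http._

  object FinagleDemo extends App {
    // A Service is an asynchronous function Req => Future[Rep].
    val client: Service[HttpRequest, HttpResponse] =
      ClientBuilder()
        .codec(Http())
        .hosts("example.com:80")
        .hostConnectionLimit(1)
        .build()

    val request = new DefaultHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, "/")
    // The returned Future composes: map, flatMap, rescue, ...
    client(request) onSuccess { response =>
      println("status: " + response.getStatus)
    } ensure {
      client.release()
    }
  }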

We will talk about:

Slides | Video

Task-Driven Scala Web Applications

Timothy Perrett

Within this talk we will discuss patterns for building highly interactive, massively scalable web applications by leveraging some of the best projects available in the Scala community.

Recent years have seen a distinct shift in user behavior online: applications now have to deal with heavily write-orientated, event-driven architectures (EDA) and real-time user interfaces. Couple this with the operational challenges of scaling these types of applications, and one can quickly find oneself coping with a lot of additional complexity.

With this in mind, this talk guides you through some of the paradigms associated with EDA and task-based user interfaces, whilst discussing how the Scala eco-system has evolved over the past years into a vibrant and intellectually rich place that has yielded many excellent projects, such as Akka and Lift. Whilst these projects have remained largely independent and standalone, utilizing parts of each can make building these highly event-driven, interactive applications far easier than it would otherwise be with more traditional software stacks.

Prototypical applications found in industry today are heavily orientated toward solving the relevant problem, with a UI designed simply to feed input to the domain objects; the result is UIs that often leave the user with little specific information about the primary intent of the display. Task-based UIs, however, can greatly assist in these scenarios by placing the user experience in a central place during the inception of any given system. Task-based UI design is orientated toward capturing user intention and sending messages (or commands) back to the server, rather than mutated data transfer objects (DTOs) that are simply persisted by the system with little appreciation for what specifically changed or what the user was trying to achieve.

This notion of sending messages has a strong synergy with actor-based messaging, and since the messages are sent from the client-side UI, with responses dynamically propagated back to that same client, it is an excellent fit with Lift's comet support, which is entirely based on actor message sending. More broadly, within this talk you will hear how you can neatly integrate a highly interactive, task-based user experience powered by Lift and propagate user events through entire software architectures, with a robust, distributed backend provided by Akka. Specifically, patterns of implementation such as CQRS and Event Sourcing have excellent synergy with the actor pattern and are superbly supported by Akka with its lightweight, fault-tolerant actors.

Building asynchronous EDAs in Scala is a lot of fun, and this talk will give attendees a view into what is possible and aim to inspire them to implement task-based UIs and message-orientated systems with the awesome Scala eco-system of tools.

Play! + Scala: Adding more Fun to the equation

Sadek Drobi, Guillaume Bort

The Play! framework is a simple, lightweight web framework originally designed for the Java programming language on top of the JVM. Its uniqueness on the JVM lies in how simple, productive and scalable it is while remaining isomorphic to the HTTP protocol.
Play! Scala targets the Scala language while keeping the key properties of the framework. It uses a more functional and Scala-idiomatic style of programming without giving up simplicity and developer friendliness.

In this talk we will give a quick introduction to Play! Scala and highlight key components of the framework and the main design techniques that will enable you to be productive and get your scalable web application up and running.

Slides | Video

TT Session 15: Other

Porting my own programming language Onion's code from Java to Scala

Kota Mizushima

I would like to talk about the experience gained by porting my own programming language Onion from Java to Scala. I started to develop Onion in 2005. Onion's code was originally written in Java and was about 10,000 LoC. The code was ugly and its maintainability was bad, because I was inexperienced when I began writing it, and Java is ill-suited to writing compilers.
Now I can use Scala, and Scala is well suited to writing compilers, so I decided to port Onion from Java to Scala. Porting was not so difficult, because IntelliJ IDEA's features helped the porting work. I would like to talk about how I ported Onion's code from Java to Scala and the problems encountered along the way. Although the port is not completely finished, most of the code has already been ported, except the parser (generated by JavaCC) and the code generator (using Apache BCEL).

Onion is a statically typed, object-oriented programming language. Onion supports the following features:

Object Scala Found - a JSR223-compliant version of the scala interpreter

Raphael Jolly

In this talk, we aim to describe the challenges of making Scala a JSR 223 compliant language, and present a solution in the form of a modified Scala interpreter. We identify three main reasons why JSR 223 support does not yet exist:

  1. the lack of type information when passing objects past the Java/script boundary
  2. caching of precompiled scripts
  3. providing a class path to the Scala compiler.

We explain why we think issue (1) is not a problem and why existing solutions are imperfect (they require some enclosing script ceremony). Then we describe our solution to problem (2) through a guided tour of the needed source code additions. We focus on the last issue (3) as being the main impediment to the development of a real implementation. The solution presented is to provide the compiler with the appropriate list of available class files through JARs' manifest files.
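
For context, JSR 223 is the javax.script API; with a compliant engine registered, evaluating Scala from the Java scripting API looks like this (the engine name "scala" is an assumption about what an implementation would register):

  import javax.script.ScriptEngineManager

  object Jsr223Demo extends App {
    // Assumes a JSR 223-compliant Scala engine, such as the one described
    // in this talk, is on the classpath.
    val engine = new ScriptEngineManager().getEngineByName("scala")
    engine.put("n", new java.lang.Integer(21))
    // The static type is lost across the Java/script boundary (issue 1
    // above), hence the cast inside the script.
    println(engine.eval("n.asInstanceOf[Integer] * 2"))
  }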

Exploring light-weight event sourcing

Erik Rozendaal

Currently many business applications are developed using a very database-centric approach, often requiring the use of complex and heavy-weight Object-Relational Mappers (ORMs) to make developers productive. Although improvements have been made (through the use of annotations, reflection, and conventions), the core issues remain:

In this talk we'll explore the use of an alternative approach using the techniques pioneered by Domain-Driven Design (DDD) and especially Command-Query Responsibility Segregation (CQRS): Event Sourcing.
Using Event Sourcing the application can be split into two parts:

Through this explicit notion of change (domain events) the developer is put back in control of the application.

Traditional languages such as Java require a lot of ceremony when implementing event sourcing, obscuring the basic simplicity. Using Scala's flexible syntax and support for lightweight classes, immutable data structures and transactional memory, only very little support code is needed to build production-ready applications using Event Sourcing. We can start simple and scale up to more complexity only when needed. During this talk we'll take a quick tour through the code you need to get started.
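
A minimal flavor of the approach in plain Scala (illustrative, not the talk's code): state is never mutated in place; it is derived by folding over the recorded events.

  // Domain events are plain immutable values.
  sealed trait Event
  case class Deposited(amount: BigDecimal) extends Event
  case class Withdrawn(amount: BigDecimal) extends Event

  // Current state is a left fold over the event history.
  case class Account(balance: BigDecimal = 0) {
    def apply(e: Event): Account = e match {
      case Deposited(a) => copy(balance = balance + a)
      case Withdrawn(a) => copy(balance = balance - a)
    }
  }

  object EventSourcingDemo extends App {
    val history = List(Deposited(100), Withdrawn(30), Deposited(5))
    val account = history.foldLeft(Account())(_ apply _)
    println(account.balance)   // 75: replaying the events reconstructs state
  }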

The goals are to make developers productive while keeping applications understandable and maintainable. This is achieved by:

13:45 - 15:25

TT Session 10: Potpourri I

Lightweight effect types for Scala

Lukas Rytz

In addition to returning a result, methods in Scala can perform side-effects such as modifying state, throwing exceptions or performing I/O. These side-effects are an important part of a method's semantics, yet they are not described in its signature.

Knowing the side-effects of methods is not only useful as documentation for programmers; it is becoming increasingly important for new tools and libraries. Examples are found in concurrent programming, the new parallel collections, DSLs and transactional memory implementations: such libraries often assume purity or limited side-effects of certain parts of the code without being able to verify it.

We are working on an extension to Scala's type system for tracking and verifying different kinds of side-effects. The main goal is to implement a system that is powerful enough to give precise and useful information, while keeping the annotation overhead as small as possible. I will present the main ideas and demonstrate a prototype implementation for tracking potentially thrown exceptions (similar to `throws` in Java, but more polymorphic) and for verifying purity with respect to state modifications.
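
For reference, the closest mechanism in today's Scala is the @throws annotation, which only marks methods for Java interoperability and is never checked; the system described here would actually verify such declarations (and more). A small example of the existing, unchecked annotation:

  import java.io.IOException

  class Reader {
    // @throws affects only the generated bytecode signature for Java
    // callers; the Scala compiler does not enforce it today.
    @throws(classOf[IOException])
    def read(path: String): String = io.Source.fromFile(path).mkString
  }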

Scala and AspectJ: Approaching modularization of crosscutting functionalities

Ramnivas Laddad

Modularizing crosscutting functionalities such as caching, transaction management, security, and auditing is a difficult problem. When not dealt with correctly, they lead to duplicated, unmaintainable, and often plain-wrong implementations.

Aspect-oriented programming (AOP) allows modularizing such functionalities through aspects. Functional programming in Scala, too, offers a way to do the same through higher-order functions. Instead of mixing code from crosscutting functionalities with business logic, developers can use higher-order functions to separate them. In many cases, using higher-order functions yields a cleaner solution than the equivalent AOP solution. In other cases, it is the opposite. In any case, there is a synergy between the two.

In this talk, we will examine common crosscutting concerns in enterprise applications. We will compare an AOP implementation based on AspectJ and a functional implementation based on Scala. The comparison is interesting in that both run on the JVM and both are statically typed. Through examples, we will show how these two approaches fare and how to use them together beneficially.
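
To make the comparison concrete, here is the functional side of the argument in miniature (illustrative, not from the talk): the crosscutting concern becomes a higher-order function wrapped around the business logic.

  object Tx {
    // The crosscutting concern lives in one place; callers pass their
    // business logic as a by-name block.
    def withTransaction[T](body: => T): T = {
      println("BEGIN")
      try { val result = body; println("COMMIT"); result }
      catch { case e: Exception => println("ROLLBACK"); throw e }
    }
  }

  object TransferService extends App {
    import Tx._
    withTransaction {
      println("transfer 100 from A to B")   // business logic only
    }
  }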

Scala.NET: What you can do with it today

Miguel Garcia

After some time in the making, Scala.NET is now in the game to gain developer mindshare. This talk covers the value proposition of the compiler, progressively working our way from console applications to targeting .NET in its different flavors, in particular the Compact Framework for mobile development. This part of the talk also reviews current Visual Studio support (including debugging and “metadata as source”) as well as work in progress in this area.

Finally, we report our experience in automating the migration of Scala sources from JDK to .NET with the help of jdk2ikvm, a tool we developed to bootstrap the compiler on .NET, all while maintaining a single code base.

TT Session 12: Potpourri II

A typed, composable configuration system for sbt

Mark Harrah

sbt is a build tool written in Scala and configured in Scala. One goal of sbt is to provide a default build by convention, while being extensively configurable. Transitioning from convention to customization should be smooth, so that only the unique aspects of the build need to be defined. Towards this goal, sbt 0.9 introduces a new typed, composable configuration system.

Important elements of the new system include first class overriding and scoping of settings and delegating settings to other scopes. First class settings allow relationships between settings, such as building up paths or defining task inputs, to be declared once and used in different contexts with minimal effort. Scoping and delegation enable configuration at the granularity of the whole build, a project, a configuration, or a single task. The new task system integrates with this configuration system to uniformly define the execution and configuration graphs of a build.
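
For flavor, the style of the new settings system as it shipped publicly (sbt 0.10 syntax, the successor of the 0.9 work described here; exact operators may differ): settings are typed key-value pairs that can be derived from other settings and scoped.

  // build.sbt: each entry is a typed Setting[_].
  name := "demo"

  version := "0.1.0"

  // Derive one setting from another.
  organization <<= name(n => "org.example." + n)

  // Scope a setting: extra compiler flags for test compilation only.
  scalacOptions in Test += "-deprecation"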

Slides | Video

The ease of Scalaz

Heiko Seeberger

While looking at the implementation of Scalaz carries the risk of blowing up your brain, this library offers an abundance of low-hanging fruit. In this talk we will take a look at some of the most tasteful and really easy-to-use Scalaz features. For example, we will see how to avoid the pitfall of Scala’s non-typesafe equals operator, how to get rid of inscrutable validation logic, and how to compose Akka actors and futures. Don’t be afraid: we will focus on using Scalaz; understanding the internals is not required.
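
One of those low-hanging fruit, the typesafe equals, in a short sketch (assuming Scalaz on the classpath):

  import scalaz._
  import Scalaz._

  object EqualDemo extends App {
    // === demands an Equal instance for a single shared type, so comparing
    // unrelated types is a compile error rather than a silent 'false'.
    println(1 === 1)          // true
    println("a" === "b")      // false
    // println(1 === "one")   // does not compile, unlike 1 == "one"
  }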

Node.scala - Implementing Scalable Async IO using Delimited Continuations

Tiark Rompf

Asynchronous IO is an important ingredient for scalable software systems. In this talk we will take a look at the popular JavaScript-based Node.js framework and present a (minimalistic) port to idiomatic Scala. We will make heavy use of delimited continuations to remove inversion of control.
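
Delimited continuations ship with Scala 2.8/2.9 as a compiler plugin (enabled with -P:continuations:enable); the shift/reset pair below shows the basic trick for removing inversion of control: the "rest of the computation" is reified as the callback k.

  import scala.util.continuations._

  object CpsDemo extends App {
    // Stands in for an async read: instead of blocking, it captures the
    // rest of the computation (k), which an IO event handler could invoke
    // whenever data actually arrives.
    def readAsync(): Int @cps[Unit] = shift { k: (Int => Unit) =>
      k(42)
    }

    reset {
      val n = readAsync()   // reads like straight-line, blocking code
      println("got " + n)
    }
  }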

TT Session 14

Dependency Injection Strategies in Scala

Dick Wall

Dependency Injection is a lightweight strategy used extensively in Java enterprise environments, with a number of implementations in the Java domain. These Java libraries work fairly well under Scala, and provide some nice features like flexible binding DSLs and good error reporting for missing or incorrect configuration, as well as the potential for run-time re-wiring. They also bring some limitations with them, and tend to rely on annotations as the least intrusive way to perform much of the configuration.

Meanwhile in Scala, language features make for options like the cake pattern, which provides a much more integrated experience, better compile-time checking, and a good deal more type safety in many uses. However, some of the nicer polish of the Java dependency injection options is missing, like centralized configuration module management and merging, along with an easy-to-use DSL that makes it easy for beginners to start using dependency injection without having to know about abstract fields and self types.

This talk will discuss some of the different options available to the Scala developer for dependency injection, and some of the possibilities that might come from a solution intended to weave the best features of the Java and Scala approaches into an easy to use, easy to configure library that makes the best possible use of Scala language features. It will also be the first public outing for an open source library that I am working on based on these ideas, and developed in my work for Locus Development, where we are using and testing it already.
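
For orientation, the cake pattern mentioned above in its smallest form (illustrative): components are traits, dependencies are declared as self-types, and "configuration" is the final mix-in.

  trait UserRepositoryComponent {
    def userRepository: UserRepository
    trait UserRepository { def find(id: Int): String }
  }

  trait UserServiceComponent { this: UserRepositoryComponent =>
    // The self-type declares the dependency; the compiler checks the wiring.
    def userService = new UserService
    class UserService { def greet(id: Int) = "Hello, " + userRepository.find(id) }
  }

  object Production extends UserServiceComponent with UserRepositoryComponent {
    val userRepository = new UserRepository { def find(id: Int) = "user-" + id }
  }

  object CakeDemo extends App {
    println(Production.userService.greet(7))
  }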

Topics covered will include:

This talk will be for beginning/intermediate Scala developers.

Slides | Video | Subcut on GitHub

18 Months With Scala: Building a Driver for MongoDB

Brendan W. McAdams

A report of the lessons learned building "Casbah", an Open Source driver for MongoDB over the course of 18 months. When Casbah was started, the author had no Scala knowledge and used it as a learning basis. Casbah now includes a DSL for querying MongoDB and makes use of many Scala features including Type Classes.

This talk covers the experience of learning Scala while building a tool, and lessons learned through trial and error. A number of design options that did and did not work will be covered, including experiments with abstract types versus type parameters, and fun with manifests.
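
A taste of the query DSL, in the style of Casbah's documentation (assuming a running MongoDB; the database and field names are illustrative):

  import com.mongodb.casbah.Imports._

  object CasbahDemo extends App {
    val coll = MongoConnection()("test")("venues")
    coll += MongoDBObject("name" -> "cafe", "checkins" -> 12)

    // The DSL builds query documents: "checkins" $gte 10 is
    // {"checkins": {"$gte": 10}}
    for (doc <- coll.find("checkins" $gte 10))
      println(doc.get("name"))
  }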

Anorm: plain old SQL, Using Scala Collections, Pattern Matching and Parsers to simplify an unnecessarily over-complexified task

Sadek Drobi

Anorm is not an Object-Relational Mapper. It is rather an SQL API for doing simpler JDBC. And since learning a new API is quite an investment, Anorm exclusively offers pre-existing Scala interfaces for consuming an SQL query result. That includes collections (Lists, lazy Streams and Maps), pattern matching, and, most interestingly, a Scala parser combinator API for constructing parsers for SQL results. This combination of interfaces yields a spectrum of usage that varies from transformation of ad-hoc queries to reusable and composable parsers for consuming sophisticated graphs.

Slides | Video

TT Session 16

Managing Binary Compatibility in Scala

Mirco Dotta

Binary compatibility is not a topic specific to the Scala language, but rather a concern for all languages targeting the JVM, Java included. Scala shares with Java many sources of potential binary incompatibilities; however, because of Scala's greater expressiveness, Scala code has unique sources of incompatibility.

The Scala programming language offers several language constructs that do not have an equivalent in Java and are not natively supported by the JVM. Because of this, the Scala compiler (scalac) transforms these constructs into lower-level, Java-compatible patterns that can then be easily translated into bytecode. Good examples of such high-level Scala constructs are traits, for mixin-based inheritance, and functions as first-class citizens.

During this presentation we will review the main sources of binary incompatibility for the Scala language, providing you with useful insights into how you should evolve your codebase to avoid binary incompatibilities. Furthermore, we will show a tool, the Migration Manager, that can be used to automatically diagnose binary incompatibilities between two versions of the same library.

15:50 - 17:30

Panel
