Program


On Day 1 of the conference, thanks to our awesome sponsor Dwango, all sessions (in both Room A and Room B) will be streamed in real time on NicoNico Live.

Saturday September 6

(Sessions coded A-* take place in Room A and B-* in Room B. Time slots showing a single entry had no parallel session in the other room.)

 9:00 -  9:55   Registration Open
 9:55 - 10:15   S-1  Opening Remarks | Kota Mizushima, ScalaMatsuri Committee
10:15 - 10:55   S-2  Keynote Address | Martin Odersky
11:10 - 11:50   A-1  The Future of sbt (tentative) | Eugene Yokota, Typesafe
                B-1  GitBucket: Perfect Github clone by Scala | Naoki Takezoe, BizReach, Inc.
11:55 - 12:35   A-2  Fifty Rapture One-Liners in Forty Minutes | Jon Pretty
                B-2  Xitrum Web Framework Live Coding Demos | Takeharu Oshida & Ngoc Dao, Mobilus Corporation
12:35 - 14:05   S-3  Lightning-Talk Session with Lunch
14:05 - 14:45   A-3  Play Framework (tentative) | Yevgeniy Brikman, LinkedIn
                B-3  Introduction to SparkSQL and Catalyst | Takuya Ueshin, Nautilus Technologies, Inc.
14:50 - 15:30   A-4  Scalable Generator: Using Scala in SIer Business | Yugo Maede, TIS Inc.
                B-4  Scala for Tidy Object-Oriented Programming | Kazuhiro Sera, M3, Inc.
15:45 - 16:25   A-5  Building a Unified "Big Data" Pipeline in Apache Spark | Aaron Davidson, Databricks
                B-5  Scala Use Cases at Hatena | Takaya Tsujikawa, Hatena Co., Ltd.
16:25 - 17:05   S-4  Business Meeting presented by Typesafe, and Coffee Break
17:05 - 17:45   A-6  Getting started with Scalding, Storm and Summingbird | Yoshimasa Niwa
                B-6  The Trial and Error in Releasing GREE Chat. GREE's First Scala Product. | Takayuki Hasegawa and Shun Ozaki, GREE, Inc.
18:00 - 18:40   A-7  Scarab: SAT-based Constraint Programming System in Scala | Takehide Soh, ISTC, Kobe University
                B-7  Weaving Dataflows with Silk | Taro L. Saito, Treasure Data, Inc.
18:45 - 19:25   A-8  A National sport and Scala | Takuya Fujimura, Dwango Mobile Co., Ltd.
                B-8  What's a macro?: Learning by Examples | Takako Shimamoto, BizReach, Inc.
19:30 - 20:10   A-9  Scala for Reals in 100 msec Ad Processing (Scala vs Ruby) | todesking, Maverick., Inc.
20:15 - 22:15   S-5  After Party featuring Lightning-Talks

Saturday September 6 / Sessions

A-4 Scalable Generator: Using Scala in SIer Business  Room A 14:50-15:30

We usually use Java when we build web applications.
We have been improving our development efficiency in various ways, but we want to go further.
I decided we needed to change our current Java-centric style, and we adopted Scala for the next step.

To solve the problems we ran into when adopting Scala, I have been developing a code generator for Play Framework and Slick.
It has the following features:


  • Supports Play Framework and Slick (generates controllers, views, models, and routes)

  • Builds on the Slick code generator and follows its conventions

  • Imports the application structure into a repository (database)

  • Provides Scaffold-like features similar to Rails, plus additional functionality

  • Works with Typesafe Activator



We plan to release this tool as open-source software.
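
The tool builds on Slick's bundled code generator. For readers unfamiliar with it, here is a minimal sketch of how that generator is typically invoked standalone (assuming a Slick 2.1-era package name and an H2 database; both are illustrative assumptions, and this is not the generator described in the talk):

    // Minimal sketch: invoking Slick's bundled code generator standalone.
    // Assumes Slick 2.1 (scala.slick.codegen.SourceCodeGenerator); the package
    // name and arguments differ in other Slick versions. Illustrative only.
    object GenerateTables extends App {
      scala.slick.codegen.SourceCodeGenerator.main(Array(
        "scala.slick.driver.H2Driver",   // Slick driver
        "org.h2.Driver",                 // JDBC driver class
        "jdbc:h2:mem:app",               // database URL
        "target/generated-sources",      // output folder
        "models.tables"                  // package for the generated code
      ))
    }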

TIS Inc. Yugo Maede

Yugo Maede

I work in the Strategy Technology Center of TIS Inc. (TIS is a system integrator in Japan). Our business covers a broad spectrum of IT services, such as developing and operating mission-critical systems for enterprises in banking, insurance, credit cards, manufacturing, and so on. I have built in-house application frameworks and development tools, and I have also taught these technologies to our engineers. Now, in order to spread Scala among them, I am evaluating how well Scala fits our company.

A-5 Building a Unified "Big Data" Pipeline in Apache Spark  Room A 15:45-16:25

As big data becomes a concern for more and more organizations, there is a need for both faster tools to process it and easier-to-use APIs. Apache Spark is a cluster computing engine written in Scala that addresses these needs through (1) in-memory computing primitives that let it run 100x faster than Hadoop and (2) concise, high-level, functional APIs in Scala, Java, and Python.

In this talk, we’ll demo the ability of Spark to unify a range of data processing techniques live by building a machine learning pipeline with 3 stages: ingesting JSON data into a SQL table; training a k-means clustering model; and applying the model to a live stream of tweets. Typically this pipeline might require a separate processing framework for each stage, but we can leverage the versatility of the Spark runtime to combine Shark, MLlib, and Spark Streaming and do all of the data processing in a single, short program. This allows us to reuse code and memory between the components, improving both development time and runtime efficiency.

This talk will be a fully live demo and code walkthrough where we’ll build up the application throughout the session, explain the libraries used at each step, and finally classify raw tweets in real-time.
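
As a rough idea of what such a single-program pipeline can look like, here is a hedged sketch using Spark 1.x-era APIs (and SparkSQL in place of Shark, which the talk mentions). The file path, featurization, and parameters are illustrative assumptions, not the code from the demo:

    // Illustrative sketch only: SQL ingestion + MLlib k-means + Spark Streaming
    // in one program. Requires spark-core, spark-sql, spark-mllib, and
    // spark-streaming-twitter on the classpath, plus Twitter OAuth credentials.
    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext
    import org.apache.spark.mllib.clustering.KMeans
    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.twitter.TwitterUtils

    object UnifiedPipeline {
      def main(args: Array[String]): Unit = {
        val sc         = new SparkContext(new SparkConf().setAppName("unified-pipeline"))
        val sqlContext = new SQLContext(sc)

        // Toy featurizer: hash a tweet's words into a fixed-size numeric vector.
        val featurize = (text: String) => {
          val buckets = new Array[Double](100)
          text.toLowerCase.split("\\s+").foreach(w => buckets(((w.hashCode % 100) + 100) % 100) += 1.0)
          Vectors.dense(buckets)
        }

        // Stage 1: ingest historical tweets stored as JSON into a SQL table.
        sqlContext.jsonFile("hdfs:///tweets.json").registerTempTable("tweets")
        val texts = sqlContext.sql("SELECT text FROM tweets").map(_.getString(0))

        // Stage 2: train a k-means model (k = 10, 20 iterations) on that data.
        val model = KMeans.train(texts.map(featurize), 10, 20)

        // Stage 3: classify a live stream of tweets with the trained model.
        val ssc = new StreamingContext(sc, Seconds(5))
        TwitterUtils.createStream(ssc, None)
          .map(_.getText)
          .map(t => (model.predict(featurize(t)), t))
          .print()

        ssc.start()
        ssc.awaitTermination()
      }
    }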

Databricks Aaron Davidson

Aaron Davidson

Aaron Davidson is an Apache Spark committer and software engineer at Databricks. He has implemented Spark standalone cluster fault tolerance and shuffle file consolidation, and has helped in the design, implementation, and testing of Spark's external sorting and driver fault tolerance. He is also a contributor to the Tachyon in-memory distributed file system and has co-authored work on Highly Available Transactions in the Berkeley AMP Lab.

A-7 Scarab: SAT-based Constraint Programming System in Scala  Room A 18:00-18:40

Since 2000, remarkable improvements have been made in the efficiency of solvers for propositional satisfiability testing (SAT). Such improvements of SAT solvers have enabled a programmer to develop SAT-based systems for planning, scheduling, and hardware/software verification. However, for a given problem, we usually need to develop a dedicated program that encodes it into SAT.

In this talk, we present Scarab, a SAT-based Constraint Programming System in Scala. The major design principle of Scarab is to provide an expressive, efficient, customizable, and portable workbench for SAT-based system developers. It provides a rich constraint modeling language on Scala and enables a programmer to rapidly specify problems and to experiment with different modelings. Scarab also provides a simple way to realize incremental solving, solution enumeration, native constraints, and dynamic addition and/or removal of constraints.

Scarab is implemented in Scala and consists of Constraint Programming Domain-Specific Language (DSL), SAT encoding module, and interface to the back-end SAT solvers. The current version of Scarab adopts Sat4j as a back-end SAT solver. The combination of Scarab and Sat4j makes it possible to develop portable SAT-based systems that run on any platform supporting Java.
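
To make the "SAT-based" part concrete, here is a minimal sketch that solves a toy constraint directly against Sat4j, the back-end solver Scarab uses. It is not Scarab's DSL; it shows the clause-by-clause encoding that a system like Scarab abstracts away:

    // Encode "exactly one of x1, x2, x3 is true" as CNF clauses by hand and
    // solve it with Sat4j. Positive integers are literals, negative integers
    // their negations. Illustrative only; Scarab hides this boilerplate.
    import org.sat4j.core.VecInt
    import org.sat4j.minisat.SolverFactory

    object Sat4jToy extends App {
      val solver = SolverFactory.newDefault()
      solver.newVar(3)

      // At least one: (x1 or x2 or x3)
      solver.addClause(new VecInt(Array(1, 2, 3)))
      // At most one: pairwise (not xi or not xj)
      solver.addClause(new VecInt(Array(-1, -2)))
      solver.addClause(new VecInt(Array(-1, -3)))
      solver.addClause(new VecInt(Array(-2, -3)))

      if (solver.isSatisfiable())
        println("model: " + solver.model().mkString(" "))   // e.g. "1 -2 -3"
      else
        println("unsatisfiable")
    }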

The source code and documentation for Scarab are available at http://kix.istc.kobe-u.ac.jp/~soh/scarab/.

ISTC, Kobe University Takehide Soh

Takehide Soh

Takehide Soh received a Master of Engineering from Kobe University in 2006. After two years at Suntory Co., Ltd., he studied at the Graduate University for Advanced Studies (SOKENDAI) and received a Ph.D. in Informatics in 2011. He currently works at the Information Science and Technology Center of Kobe University as an assistant professor. His research interests are SAT technology, constraint programming, and their applications.

B-1 GitBucket: Perfect Github clone by Scala  Room B 11:10-11:50

GitBucket is a GitHub clone written in Scala. Its most important feature is easy installation: it requires only a Java VM, can be started with a single command, and also provides SSH access. In this session, I will explain GitBucket's core features, the technologies it uses, and its future roadmap.
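
For reference, once a Java runtime is installed and gitbucket.war has been downloaded from the releases page, that single command is simply:

    java -jar gitbucket.war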

BizReach, Inc. Naoki Takezoe

Naoki Takezoe

I'm a Scala programmer at BizReach, Inc., working on a new service built with Scala. I'm the author of GitBucket, a GitHub clone written in Scala, and one of the committers of Scalatra, a simple and powerful web framework for Scala.

B-2 Xitrum Web Framework Live Coding Demos  Room B 11:55-12:35

The Xitrum project was started by Ngoc Dao in 2010.

Xitrum 2.x was introduced in a lightning talk at Scala Conference in Japan 2013
(http://www.slideshare.net/ngocdaothanh/xitrum-scalaconfjp2013). This year,
we would like to introduce Xitrum 3.x and its new features, live-coding style:


  • Core features: auto route collecting (see the sketch below), WebSocket, SockJS, CORS, i18n, etc.

  • Features for rapid development: class and route autoreloading, Swagger doc and client-side code generation, etc.

  • Features for easier server-side operation: Scalive, Metrics, etc.

  • Features for scaling out to multiple servers: clustering with Akka and Hazelcast
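
To give a flavor of the auto route collecting mentioned above, here is a minimal sketch of a Xitrum action whose route is declared with an annotation and picked up automatically (based on Xitrum's documented style; exact signatures may differ between versions):

    // Minimal Xitrum action sketch; illustrative only.
    // The route is declared on the class and collected automatically at
    // startup, so no central route table is needed.
    import xitrum.Action
    import xitrum.annotation.GET

    @GET("hello")
    class HelloAction extends Action {
      def execute(): Unit = {
        respondText("Hello from Xitrum")
      }
    }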

Mobilus Corporation Takeharu Oshida & Ngoc Dao

Takeharu Oshida & Ngoc Dao

Oshida and Ngoc work at Mobilus, a startup that provides mobile solutions. At Mobilus, Xitrum is used in projects such as real-time chat solutions for game, telecom, and education companies in Japan. Outside Mobilus, Xitrum is also used at companies in Korea and Russia.

B-3 Introduction to SparkSQL and Catalyst  Room B 14:05-14:45

Apache Spark, written in Scala, attracts a lot of attention as a distributed processing engine that is faster than Hadoop MapReduce. I will introduce SparkSQL, one of Apache Spark's components, and Catalyst, the framework SparkSQL is built on.

SparkSQL is a project to execute SQL on Apache Spark, developed mainly by Databricks, Inc. It parses SQL, builds a logical execution plan and then a physical execution plan, and finally converts the plan into RDDs (Resilient Distributed Datasets). The logical execution plan is optimized with a rule-based approach. This planning and optimization framework is provided by Catalyst.
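
For orientation, here is a hedged sketch of what this looks like from user code (Spark 1.x-era API; the file name and query are illustrative assumptions). The plans Catalyst produces can be inspected through queryExecution:

    // Sketch of SparkSQL usage. The SQL text is parsed into a logical plan,
    // optimized by Catalyst's rule-based optimizer, compiled to a physical
    // plan, and finally executed as RDDs.
    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    object SparkSqlIntro {
      def main(args: Array[String]): Unit = {
        val sc         = new SparkContext(new SparkConf().setAppName("sparksql-intro").setMaster("local[*]"))
        val sqlContext = new SQLContext(sc)

        // Load JSON (the schema is inferred) and register it as a table.
        sqlContext.jsonFile("people.json").registerTempTable("people")

        val adults = sqlContext.sql("SELECT name FROM people WHERE age >= 30")

        // Shows the parsed, analyzed, optimized (Catalyst), and physical plans.
        println(adults.queryExecution)
        adults.collect().foreach(println)
      }
    }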

Nautilus Technologies, Inc. Takuya Ueshin

Takuya Ueshin

A programmer working at Nautilus Technologies, Inc. A Spark contributor.

B-4 Scala for Tidy Object-Oriented Programming  Room B 14:50-15:30

Nowadays, Scala gets attention as a stable platform for asynchronous, event-driven architectures and as a pragmatic functional programming language. I also think quite a few people look to Scala for better, more refined object-oriented programming with a functional flavor; some may call it a "better Java" or something like that.

I'm the main developer of ScalikeJDBC and Skinny Framework. While developing these libraries, I have been seeking the best realistic balance between immutable data structures and object-oriented programming. I found that, while the quality of a library's API design is of course important, building a consensus on coding style among the team members who use it matters even more.

In this talk, I'll show you useful libraries and tips for a friendly yet solid coding style, aimed at programmers who are used to object-oriented programming.
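
As one concrete flavor of this style, here is a minimal ScalikeJDBC sketch (illustrative only; the connection settings and table are assumptions, not code from the talk): an immutable case class plus a thin data-access object.

    // Immutable case class + a small DAO built on ScalikeJDBC.
    import scalikejdbc._

    case class Member(id: Long, name: String)

    object Member {
      // Map a result-set row to the immutable case class.
      def apply(rs: WrappedResultSet): Member =
        Member(rs.long("id"), rs.string("name"))

      def findAll()(implicit s: DBSession = AutoSession): List[Member] =
        sql"SELECT id, name FROM members".map(Member(_)).list.apply()
    }

    object Demo extends App {
      Class.forName("org.h2.Driver")
      ConnectionPool.singleton("jdbc:h2:mem:demo;DB_CLOSE_DELAY=-1", "sa", "")
      DB.autoCommit { implicit s =>
        sql"CREATE TABLE members (id BIGINT PRIMARY KEY, name VARCHAR(64))".execute.apply()
        sql"INSERT INTO members VALUES (1, 'Alice')".update.apply()
      }
      println(Member.findAll())   // List(Member(1,Alice))
    }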

M3, Inc. Kazuhiro Sera

Kazuhiro Sera

A Scala enthusiast in Japan. ScalikeJDBC, Skinny Framework project lead. A web developer at M3, Inc.

B-5 Scala Use Cases at Hatena  Room B 15:45-16:25

Mackerel, an application performance management service provided by Hatena, has adopted Scala and Play2 on the server side. I will explain why we chose Scala even though we had used Perl for over 10 years, how the choice affected our development flow and the product, and how Mackerel is currently developed and operated.

Hatena Co., Ltd. Takaya Tsujikawa

Takaya Tsujikawa

Software engineer at Hatena, lead developer of Mackerel team.

B-6 The Trial and Error in Releasing GREE Chat. GREE's First Scala Product.  Room B 17:05-17:45

We released a chat service named GREE Chat in June 2014.
This presentation covers our approach to solving some of the problems we encountered while developing the backend of our chat service.

Topics:


  • Problems we encountered when introducing Scala into our system for the first time

  • Techniques for maintaining real-time performance with hundreds of thousands of users

  • Asynchronous, parallel, and concurrent processing using Finagle and Akka (a minimal sketch follows below)
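
A minimal sketch of the Finagle side, following the standard Finagle HTTP quickstart shape (illustrative only, not GREE's code):

    // Each request is handled asynchronously and answered with a Future.
    // Requires the finagle-http artifact.
    import com.twitter.finagle.{Http, Service}
    import com.twitter.finagle.http.{Request, Response}
    import com.twitter.util.{Await, Future}

    object ChatEcho extends App {
      val service = new Service[Request, Response] {
        def apply(req: Request): Future[Response] = {
          val rep = Response()                        // 200 OK by default
          rep.contentString = s"echo: ${req.contentString}"
          Future.value(rep)
        }
      }
      val server = Http.serve(":8080", service)       // non-blocking bind
      Await.ready(server)                             // keep the process alive
    }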

GREE, Inc. Takayuki Hasegawa and Shun Ozaki

Takayuki Hasegawa and Shun Ozaki

Both joined GREE, Inc. in 2013 as new graduates and are currently engineers on the GREE Chat project.

B-7 Weaving Dataflows with Silk  Room B 18:00-18:40

Silk is a framework for building dataflows in Scala. In Silk, users write data-processing code with collection operators (e.g., map, filter, reduce, join). Silk uses Scala macros to construct a DAG of dataflows whose nodes are annotated with the variable names used in the program. By using these variable names as markers in the DAG, Silk can support interrupting and resuming dataflows and querying intermediate data. By separating the dataflow description from its computation, Silk lets us switch executors, called weavers, for in-memory or cluster computing without modifying the code. In this talk, we will show how Silk helps you run data-processing pipelines as you write the code.
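
The following is a hypothetical illustration only, not Silk's actual API. It sketches the general idea the abstract describes: the dataflow is written once against an abstract "weaver", so the execution strategy (in-memory below; a cluster weaver could implement the same trait) can be swapped without touching the dataflow code.

    // Hypothetical dataflow/weaver separation; NOT Silk's API.
    trait Weaver {
      type Data[A]
      def source[A](xs: Seq[A]): Data[A]
      def map[A, B](d: Data[A])(f: A => B): Data[B]
      def filter[A](d: Data[A])(p: A => Boolean): Data[A]
      def collect[A](d: Data[A]): Seq[A]
    }

    object InMemoryWeaver extends Weaver {
      type Data[A] = Seq[A]
      def source[A](xs: Seq[A])                 = xs
      def map[A, B](d: Seq[A])(f: A => B)       = d.map(f)
      def filter[A](d: Seq[A])(p: A => Boolean) = d.filter(p)
      def collect[A](d: Seq[A])                 = d
    }

    object Demo extends App {
      // The pipeline is only a description against Weaver; switching the
      // executor does not require changing this code.
      def pipeline(w: Weaver): Seq[Int] = {
        import w._
        collect(filter(map(source(1 to 10))(_ * 2))(_ > 10))
      }
      println(pipeline(InMemoryWeaver))   // the doubled values greater than 10
    }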

Treasure Data, Inc. Taro L. Saito

Taro L. Saito

Taro L. Saito is a software engineer at Treasure Data, Inc. He received a Ph.D. in computer science from the University of Tokyo. Before joining Treasure Data, he worked on genome science, database management systems, and distributed computing as an assistant professor at the University of Tokyo.

B-8 What's a macro?: Learning by Examples  Room B 18:45-19:25

Scala has included macros since version 2.10.0.
In this session, I would like to talk about topics such as the following (a small macro sketch appears after the list):


  • What benefits do macros bring us?

  • What can we do with macros?

  • Actual use cases
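
As a flavor of what a def macro looks like, here is a classic hedged sketch (Scala 2.11-style with quasiquotes; all names are illustrative): a debug macro that prints an expression's source text alongside its runtime value.

    // The macro receives the expression's AST at compile time, so it can emit
    // code that prints the expression's source text together with its value.
    // Macro definitions must be compiled in a separate module from the code
    // that uses them.
    import scala.language.experimental.macros
    import scala.reflect.macros.blackbox.Context

    object DebugMacro {
      def debug(expr: Any): Unit = macro debugImpl

      def debugImpl(c: Context)(expr: c.Expr[Any]): c.Expr[Unit] = {
        import c.universe._
        val source = Literal(Constant(show(expr.tree)))   // the expression as text
        c.Expr[Unit](q"""println($source + " = " + ${expr.tree})""")
      }
    }

    // Usage, from another module:
    //   val xs = List(1, 2, 3)
    //   DebugMacro.debug(xs.map(_ * 2))   // prints something like: xs.map(...) = List(2, 4, 6)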

BizReach, Inc. Takako Shimamoto

Takako Shimamoto

A Scala programmer at BizReach, Inc. and a GitBucket committer.


Sunday September 7

On Day 2 we will host Japan's first ever Scala unconference. An unconference is a conference in which you, the attendees, make the rules! You decide what you want to discuss/learn about/hack on. We've never done this before so we're not sure exactly how it's going to turn out, but it should be a lot of fun! For more details, check out the unconference page.

The unconference will run from 10am to 5pm. (Venue opens at 9am.) We will provide breakfast and lunch.


© 2012 - ScalaMatsuri Committee