Operations


Query API


ScalikeJDBC provides a variety of query APIs: single, first, list, and foreach. Each of them executes java.sql.PreparedStatement#executeQuery().


Single / Optional Result for Query


The single API returns the matched row as an Option. If more than one row unexpectedly matches, a runtime exception is thrown.

import scalikejdbc._

val id = 123

// simple example
val name: Option[String] = DB readOnly { implicit session =>
  sql"select name from emp where id = ${id}".map(rs => rs.string("name")).single.apply()
}

// defined mapper as a function
val nameOnly = (rs: WrappedResultSet) => rs.string("name")
val name: Option[String] = DB readOnly { implicit session =>
  sql"select name from emp where id = ${id}".map(nameOnly).single.apply()
}

// define a class to map the result
case class Emp(id: String, name: String)
val emp: Option[Emp] = DB readOnly { implicit session =>
  sql"select id, name from emp where id = ${id}"
    .map(rs => Emp(rs.string("id"), rs.string("name"))).single.apply()
}

// QueryDSL
object Emp extends SQLSyntaxSupport[Emp] {
  def apply(e: ResultName[Emp])(rs: WrappedResultSet): Emp = new Emp(id = rs.get(e.id), name = rs.get(e.name))
}
val e = Emp.syntax("e")
val emp: Option[Emp] = DB readOnly { implicit session =>
  withSQL { select.from(Emp as e).where.eq(e.id, id) }.map(Emp(e.resultName)).single.apply()
}

You can learn about QueryDSL in detail here:

/documentation/query-dsl


First Result from Multiple Results


The first API returns the first of the matched rows as an Option.

val name: Option[String] = DB readOnly { implicit session =>
  sql"select name from emp".map(rs => rs.string("name")).first.apply()
}

val e = Emp.syntax("e")
val name: Option[String] = DB readOnly { implicit session =>
  withSQL { select(e.result.name).from(Emp as e) }.map(_.string(e.resultName.name)).first.apply()
}

List Results


The list API returns the matched rows as a scala.collection.immutable.List.

val names: List[String] = DB readOnly { implicit session =>
  sql"select name from emp".map(rs => rs.string("name")).list.apply()
}

val e = Emp.syntax("e")
val names: List[String] = DB readOnly { implicit session =>
  withSQL { select(e.result.name).from(Emp as e) }.map(_.string(e.resultName.name)).list.apply()
}

Foreach Operation


The foreach API lets you perform side effects while iterating over the rows one by one. It is particularly useful for streaming through large ResultSets without loading everything into memory.

// `out` stands for any Writer/OutputStream the results are written to
DB readOnly { implicit session =>
  sql"select name from emp".foreach { rs =>
    out.write(rs.string("name"))
  }
}

val e = Emp.syntax("e")
DB readOnly { implicit session =>
  withSQL { select(e.result.name).from(Emp as e) }.foreach { rs =>
    out.write(rs.string(e.resultName.name))
  }
}

Setting JDBC fetchSize


With the PostgreSQL JDBC driver, for example, the default fetchSize of 0 makes the driver load the entire result set into memory at once, which can cause memory issues for large results. Managing the fetchSize appropriately prevents such problems.

https://docs.oracle.com/javase/8/docs/api/java/sql/Statement.html#setFetchSize-int-

From version 2.0.5 onwards, ScalikeJDBC allows you to specify the JDBC fetchSize to better control memory usage and enhance performance:

DB readOnly { implicit session =>
  sql"select name from emp"
    .fetchSize(1000)
    .foreach { rs => out.write(rs.string("name")) }
}

Alternatively, also from version 2.0.5 onwards, you can set the fetchSize directly on the scalikejdbc.DBSession:

val (e, c) = (Emp.syntax("e"), Cmp.syntax("c"))

DB readOnly { implicit session =>
  session.fetchSize(1000)

  withSQL { select(e.result.name).from(Emp as e) }.foreach { rs =>
    out.write(rs.string(e.resultName.name))
  }
  withSQL { select(c.result.name).from(Cmp as c) }.foreach { rs =>
    out.write(rs.string(c.resultName.name))
  }
  }
}

Implementing Custom Extractor


Implementing a custom extractor is useful when you need more control over how data is retrieved from a result set, such as preserving null values. Here's an example of a custom extractor that maps a whole row to a Map while keeping null columns, which is handy for testing queries:

def toMap(rs: WrappedResultSet): Map[String, Any] = {
  (1 to rs.metaData.getColumnCount).foldLeft(Map[String, Any]()) { (result, i) =>
    val label = rs.metaData.getColumnLabel(i)
    // Some(...) keeps columns whose value is null; Option(...) would drop them
    Some(rs.any(label)).map { nullableValue => result + (label -> nullableValue) }.getOrElse(result)
  }
}

sql"select * from emp".map(rs => toMap(rs)).single.apply()

This snippet fetches data from the database while preserving null values in the resulting Map, which makes the behavior of your queries easier to inspect.
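If you would rather drop null columns from the map entirely, a small variant (our own sketch, not from the library docs) replaces Some with Option:

def toMapSkippingNulls(rs: WrappedResultSet): Map[String, Any] = {
  (1 to rs.metaData.getColumnCount).foldLeft(Map[String, Any]()) { (result, i) =>
    val label = rs.metaData.getColumnLabel(i)
    // Option(null) is None, so null columns are omitted from the result
    Option(rs.any(label)).map(value => result + (label -> value)).getOrElse(result)
  }
}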


Using ParameterBinder


ParameterBinder lets you customize how a parameter is bound to a PreparedStatement. This is particularly useful for complex data types such as binary streams. Here's how you can bind an InputStream to a PreparedStatement using #setBinaryStream():

sql"create table blob_example (id bigint, data blob)").execute.apply()

val bytes = scala.Array[Byte](1, 2, 3, 4, 5, 6, 7)
val in = new ByteArrayInputStream(bytes)
val bytesBinder = ParameterBinder(
  value = in,
  binder = (stmt: PreparedStatement, idx: Int) => stmt.setBinaryStream(idx, in, bytes.length)
)

sql"insert into blob_example (data) values (${bytesBinder})").update.apply()

To have parameters bound implicitly across your code, define a ParameterBinderFactory instead. The factory generates ParameterBinder instances for a specific type whenever needed. Here's an example for a MemberId type:

case class MemberId(value: Int)
case class Member(id: MemberId, name: String)
object Member extends SQLSyntaxSupport[Member] {
  def apply(m: ResultName[Member])(rs: WrappedResultSet): Member =
    new Member(id = MemberId(rs.int(m.id)), name = rs.string(m.name))
}

implicit val memberIdPbf = ParameterBinderFactory[MemberId] {
  value => (stmt, idx) => stmt.setInt(idx, value.value)
}

val m = Member.syntax("m")
val memberId = MemberId(123)
withSQL {
  select.from(Member as m).where.eq(m.id, memberId) // `memberIdPbf` is applied here
}.map(Member(m.resultName)).single.apply()

// CAUTION:
// SQL interpolation works with ParameterBinder but not with ParameterBinderFactory
sql"select id from member where id = $memberId" // will bind `memberId` as an object

Using TypeBinder


To extract values of a specific type from a ResultSet, you’ll need to implement a TypeBinder type class for that type. This allows you to define custom extraction logic for handling database values as custom types. For example, if you have a MemberId type that wraps a Long value, you can create a TypeBinder for it like this:

import scalikejdbc._
import java.sql.ResultSet

implicit val session = AutoSession

// prepare data
sql"create table member (id bigint not null primary key, name varchar(100))".execute.apply()
sql"insert into member values (1, 'Alice')".update.apply()
sql"insert into member values (2, 'Bob')  ".update.apply()
sql"insert into member values (3, 'Chris')".update.apply()

case class MemberId(id: Long)
case class Member(id: MemberId, name: String)

// compilation error here
val ids: Seq[MemberId] = sql"select * from member".map(_.get[MemberId]("id")).list.apply()

// <console>:24: error: could not find implicit value for evidence parameter of type scalikejdbc.TypeBinder[MemberId]
//         val ids: Seq[MemberId] = sql"select * from member".map(_.get[MemberId]("id")).list.apply()
//                                                                               ^

Since your MemberId type is not supported by default, you get a compilation error. You can find the predefined TypeBinder instances in the scalikejdbc.TypeBinder companion object.

So you need to define your own TypeBinder for the MemberId type, as shown below.

// define TypeBinder[MemberId]
implicit val memberIdTypeBinder: TypeBinder[MemberId] = new TypeBinder[MemberId] {
  def apply(rs: ResultSet, label: String): MemberId = MemberId(rs.getLong(label))
  def apply(rs: ResultSet, index: Int): MemberId = MemberId(rs.getLong(index))
}

// WrappedResultSet#get[MemberId]("id") works with above implicit definition
val ids: Seq[MemberId] = sql"select id from member".map(_.get[MemberId]("id")).list.apply()
// ids: Seq[MemberId] = List(MemberId(1), MemberId(2), MemberId(3))

Of course, you can also handle custom types such as MemberId without defining a TypeBinder by converting the value manually inside the extractor function:

val ids: Seq[MemberId] = sql"select id from member".map(rs => MemberId(rs.long("id"))).list.apply()

Using Binders to define both ParameterBinderFactory and TypeBinder


Binders is convenient when you need both a ParameterBinderFactory and a TypeBinder for a custom type: it defines, in one place, how the type is sent to and retrieved from the database, so your custom types integrate seamlessly into your database operations.

// object Binders {
//   def of[A](f: Any => A)(g: A => (PreparedStatement, Int) => Unit): Binders[A]
// }

import scalikejdbc._

case class MemberId(value: Int)

implicit val memberIdBinders: Binders[MemberId] = Binders.of[MemberId] {
  // For TypeBinder (f: Any => A)
  case null => null
  case v: Float => MemberId(v.toInt)
  case v: Double => MemberId(v.toInt)
  case n: Number => MemberId(n.intValue)
  case v => MemberId(v.toString.toInt)
} { v => (ps, idx) =>
  // For ParameterBinderFactory (A => (PreparedStatement, Int) => Unit)
  ps.setInt(idx, v.value)
}
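With memberIdBinders in scope, the single definition works in both directions. A minimal sketch, assuming the member table from the TypeBinder section and a Member SQLSyntaxSupport object like the one defined earlier:

// TypeBinder direction: MemberId values are extracted from the ResultSet
val ids: Seq[MemberId] = DB readOnly { implicit session =>
  sql"select id from member".map(_.get[MemberId]("id")).list.apply()
}

// ParameterBinderFactory direction: a MemberId is bound in the where clause
val m = Member.syntax("m")
val one: Option[MemberId] = DB readOnly { implicit session =>
  withSQL { select(m.result.id).from(Member as m).where.eq(m.id, MemberId(1)) }
    .map(_.get[MemberId](m.resultName.id)).single.apply()
}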

Using joda-time library


When working with the joda-time library, you need to include the optional support module in your project:

libraryDependencies += "org.scalikejdbc" %% "scalikejdbc-joda-time" % "4.3.0"

After importing, you can use the pre-defined ParameterBinderFactory and TypeBinder instances to handle joda-time specific types:

import scalikejdbc._
// If you need ParameterBinderFactory for joda-time classes
import scalikejdbc.jodatime.JodaParameterBinderFactory._
// If you need TypeBinder for joda-time classes
import scalikejdbc.jodatime.JodaTypeBinder._
import org.joda.time.DateTime

case class Group(
  id: Long,
  name: Option[String],
  createdAt: DateTime)

object Group extends SQLSyntaxSupport[Group] {
  def apply(g: SyntaxProvider[Group])(rs: WrappedResultSet): Group = apply(g.resultName)(rs)
  def apply(g: ResultName[Group])(rs: WrappedResultSet): Group = new Group(
    id = rs.get(g.id),
    name = rs.get(g.name),
    createdAt = rs.get(g.createdAt)
    // (or) createdAt = rs.get[DateTime](g.createdAt)
    // (or) createdAt = rs.jodaDateTime(g.createdAt)
  )
}
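Here's a minimal usage sketch of the mapper above (assuming a matching table for Group already exists; the query itself is just an illustration):

val g = Group.syntax("g")
val groups: List[Group] = DB readOnly { implicit session =>
  // uses the SyntaxProvider-based apply defined above
  withSQL { select.from(Group as g).orderBy(g.id) }.map(Group(g)).list.apply()
}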

Update API


The update API executes java.sql.PreparedStatement#executeUpdate().

import scalikejdbc._

// example values used below
val (id, name, newName) = (123, "Alice", "Bob")

DB localTx { implicit session =>
  sql"""insert into emp (id, name, created_at) values (${id}, ${name}, current_timestamp)"""
    .update.apply()
  val newId = sql"insert into emp (name, created_at) values (${name}, current_timestamp)"
    .updateAndReturnGeneratedKey.apply()
  sql"update emp set name = ${newName} where id = ${newId}".update.apply()
  sql"delete from emp where id = ${newId}".update.apply()
}

val column = Emp.column
DB localTx { implicit s =>
  withSQL {
    insert.into(Emp).namedValues(
      column.id -> id,
      column.name -> name,
      column.createdAt -> sqls.currentTimestamp)
  }.update.apply()

  val newId: Long = withSQL {
    insert.into(Emp).namedValues(column.name -> name, column.createdAt -> sqls.currentTimestamp)
  }.updateAndReturnGeneratedKey.apply()

  withSQL { update(Emp).set(column.name -> newName).where.eq(column.id, newId) }.update.apply()
  withSQL { delete.from(Emp).where.eq(column.id, newId) }.update.apply()
}


Execute API


The execute API executes java.sql.PreparedStatement#execute().

DB autoCommit { implicit session =>
  sql"create table emp (id integer primary key, name varchar(30))".execute.apply()
}

// QueryDSL doesn't support DDL yet.
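Until it does, you can keep issuing DDL through the SQL interpolation (or the plain SQL() constructor) alongside QueryDSL code; a quick sketch (the dept table is just an example):

DB autoCommit { implicit session =>
  // SQL(...) behaves the same as the sql"..." interpolation here
  SQL("create table dept (id integer primary key, name varchar(30))").execute.apply()
}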

Batch API


The batch and batchByName APIs execute java.sql.PreparedStatement#executeBatch().

import scalikejdbc._

DB localTx { implicit session =>
  val batchParams: Seq[Seq[Any]] = (2001 to 3000).map(i => Seq(i, "name" + i))
  sql"insert into emp (id, name) values (?, ?)".batch(batchParams: _*).apply()
}

DB localTx { implicit session =>
  sql"insert into emp (id, name) values ({id}, {name})"
    .batchByName(Seq(Seq("id" -> 1, "name" -> "Alice"), Seq("id" -> 2, "name" -> "Bob")): _*)
    .apply()
}

val column = Emp.column
DB localTx { implicit session =>
  val batchParams: Seq[Seq[Any]] = (2001 to 3000).map(i => Seq(i, "name" + i))
  withSQL {
    insert.into(Emp).namedValues(column.id -> sqls.?, column.name -> sqls.?)
  }.batch(batchParams: _*).apply()
}
