Porcupine Programmer

Programming rants, random stuff and some more programming.

Mobilization 2013 and Android Tech Talks Meetup


I’ll give a presentation at this year’s Mobilization conference in Łódź on October 26th.

I’ll talk about the challenges related to ContentProvider and the data model in general that we faced during two years of development of Base CRM for Android. Even if this particular topic does not concern you, the agenda is ripe with other interesting Android topics: dependency injection with Dagger, Gradle, unit testing, continuous integration. It’s not an Android-specific event – there are also several presentations about other mobile platforms.

If you already have other plans for October 26th, but you want to share some war stories related to the data model on Android or just want to talk about Android with fellow geeks, I recommend a meetup happening next week in Kraków: Android Tech Talks #3. I’ll give a short topic intro, which (I hope) will be followed by a deep, insightful discussion.

Guava Goodies


This is a long overdue post after my Guava on Android post from February. Since then I’ve been using Guava in pretty much every Java project I was involved in and I still find new stuff that makes my code both shorter and clearer. Some random examples:

Objects.equal()

// instead of:
boolean equal = one == null
    ? other == null
    : one.equals(other);

// Guava style:
boolean equal = Objects.equal(one, other);

Objects.hashCode()

// instead of:
@Override
public int hashCode() {
  int result = x;
  result = 31 * result + (y != null ? Arrays.hashCode(y) : 0);
  result = 31 * result + (z != null ? z.hashCode() : 0);
  return result;
}

// Guava style:
@Override
public int hashCode() {
  return Objects.hashCode(x, y, z);
}

Joiner

// instead of:
StringBuilder b = new StringBuilder();
for (int i = 0; i != a.length; ++i) {
  b.append(a[i]);
  if (i != a.length - 1) {
    b.append(", ");
  }
}
return b.toString();

// Guava style:
Joiner.on(", ").join(a);

ComparisonChain

// instead of:
@Override
public int compareTo(Person other) {
  int cmp = lastName.compareTo(other.lastName);
  if (cmp != 0) {
    return cmp;
  }
  cmp = firstName.compareTo(other.firstName);
  if (cmp != 0) {
    return cmp;
  }
  return Integer.compare(zipCode, other.zipCode);
}

// Guava style:
@Override
public int compareTo(Person other) {
  return ComparisonChain.start()
      .compare(lastName, other.lastName)
      .compare(firstName, other.firstName)
      .compare(zipCode, other.zipCode)
      .result();
}

The Lists, Maps and Sets classes contain a bunch of newFooCollection() factory methods, which effectively replace the diamond operator from JDK 7, but also allow you to initialize the collection from varargs.
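
The varargs trick is easy to emulate in plain JDK if you want to see what those factories buy you – a minimal sketch (the method name mirrors Guava’s, but this is not Guava code):

```java
import java.util.ArrayList;
import java.util.List;

public class CollectionFactories {
  // Mimics Guava's Lists.newArrayList(E...): the compiler infers the
  // element type, and the varargs are copied into a fresh ArrayList.
  @SafeVarargs
  public static <E> List<E> newArrayList(E... elements) {
    List<E> list = new ArrayList<>(elements.length);
    for (E element : elements) {
      list.add(element);
    }
    return list;
  }
}
```

With it, `List<String> names = CollectionFactories.newArrayList("a", "b", "c");` replaces a constructor call plus three add() calls.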

Sets also contains the difference(), intersection(), etc. methods for common operations on sets, which a) have sane names, unlike some stuff from the JDK’s Collections, and b) don’t change the operands, so you don’t have to make a defensive copy if you want to use the same set in two operations.
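
To see why the non-mutating behaviour matters, here is roughly what you have to write with the bare JDK, where retainAll()/removeAll() mutate the receiver (a plain-Java illustration, not Guava code):

```java
import java.util.HashSet;
import java.util.Set;

public class SetOps {
  // JDK style: retainAll() computes the intersection by mutating the
  // receiver, so reusing the original set requires a defensive copy.
  public static <E> Set<E> intersection(Set<E> a, Set<E> b) {
    Set<E> copy = new HashSet<>(a); // defensive copy keeps 'a' intact
    copy.retainAll(b);
    return copy;
  }

  // Same trick for the difference: elements of 'a' not present in 'b'.
  public static <E> Set<E> difference(Set<E> a, Set<E> b) {
    Set<E> copy = new HashSet<>(a);
    copy.removeAll(b);
    return copy;
  }
}
```

Guava’s versions skip even the copy – they return lightweight views over the original sets.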

Speaking of defensive copying: Guava has a set of Immutable collections, which were created just for this purpose. There are a few other very useful collections: LoadingCache, which you can think of as a lazy map with a specified generator for new items; Multiset, very handy if you need to build something like a histogram; and Table, if you need to look up a value using two keys.
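
The histogram use case is a good illustration of what Multiset hides: counting occurrences with a plain Map means handling the missing-key case by hand every time. A sketch of the boilerplate (plain JDK, not the Guava API):

```java
import java.util.HashMap;
import java.util.Map;

public class Histogram {
  // What Multiset.add()/count() replaces: a Map from element to count,
  // with the "not seen yet" case handled manually.
  public static <E> Map<E, Integer> countOccurrences(Iterable<E> items) {
    Map<E, Integer> counts = new HashMap<>();
    for (E item : items) {
      Integer current = counts.get(item);
      counts.put(item, current == null ? 1 : current + 1);
    }
    return counts;
  }
}
```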

The other stuff I use very often is Preconditions. It’s just syntactic sugar for some sanity checks in your code, but it makes them more obvious, especially when you skim through some unfamiliar code. Bonus points: if you don’t use the return values of checkNotNull() and checkPositionIndex(), you can remove those checks from performance-critical sections using Proguard.
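
If you haven’t seen them, the whole Preconditions idea is roughly this shape – a minimal reimplementation sketch, not the actual Guava source:

```java
public class Preconditions {
  // Fail fast with an NPE, and return the argument so the check can be
  // inlined in an assignment: this.name = checkNotNull(name);
  public static <T> T checkNotNull(T reference) {
    if (reference == null) {
      throw new NullPointerException();
    }
    return reference;
  }

  // Validate a method argument; reads much better at the top of a
  // method than a bare if/throw buried in the logic.
  public static void checkArgument(boolean expression, String message) {
    if (!expression) {
      throw new IllegalArgumentException(message);
    }
  }
}
```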

On Android you have the Log.getStackTraceString() helper method, but in plain Java you’d have to build one from Throwable.getStackTrace(). Only you don’t have to, since Guava has the Throwables.getStackTraceAsString() utility method.
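
The plain-Java version is short but easy to forget – roughly the idea behind the Guava helper (this is my sketch, not the Guava source):

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class StackTraces {
  // printStackTrace() can write to any PrintWriter, so capturing the
  // trace as a String is a matter of pointing it at an in-memory
  // StringWriter instead of System.err.
  public static String getStackTraceAsString(Throwable throwable) {
    StringWriter stringWriter = new StringWriter();
    throwable.printStackTrace(new PrintWriter(stringWriter));
    return stringWriter.toString();
  }
}
```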

Guava also introduces some functional idioms in the form of Collections2.transform() and Collections2.filter(), but I have mixed feelings about them. On one hand they are sometimes life savers, but usually they make the code much uglier than the good ol’ imperative for loop, so use them with caution. They get especially ugly when you need to chain multiple transformations and filters, but for this case Guava provides the FluentIterable interface.

None of the APIs listed above is absolutely necessary, but seriously, you want to use Guava (though sometimes not the latest version). Each part of it raises the abstraction level of your code a tiny bit, improving it one line at a time.

Forger Library


Sometimes the code you write is hard to test, and the most likely reason is that you wrote shitty code. Other times the code is quite easy to test, but setting up the test fixture is extremely tedious. That may also mean you wrote shitty code, but it may just mean that you don’t have the right tools.

For me the most painful part of writing tests was filling the data model with fake data. The most straightforward thing to do is write helper methods for creating this data, but that means you’ll have two pieces of code to maintain side by side: the data model and the helper methods. The problem gets even more complicated when you need to create a whole hierarchy of objects.

The first step is generating valid ContentValues for your data model. You need to know the column names and the type of data that should be generated for a given column. Note that for the column data type you cannot really use the database table definitions – for example, SQLite doesn’t have a boolean column type, so you’d define your column as integer, but the only valid values for that column are 1 and 0.
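
In other words, the generator has to work from the Java-side field types, not the SQLite column types. A hypothetical sketch of such a generator (the class name and type mapping are mine, not the Forger API):

```java
import java.util.Random;

public class FakeDataGenerator {
  private final Random random = new Random();

  // Map the declared Java type to a value that is valid for the
  // corresponding column. Note the boolean case produces 0 or 1,
  // because SQLite has no boolean column type.
  public Object generate(Class<?> fieldType) {
    if (fieldType == boolean.class || fieldType == Boolean.class) {
      return random.nextBoolean() ? 1 : 0;
    }
    if (fieldType == long.class || fieldType == Long.class) {
      return random.nextLong();
    }
    if (fieldType == String.class) {
      return "fake-" + random.nextInt(1000);
    }
    throw new IllegalArgumentException("Unsupported type: " + fieldType);
  }
}
```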

This is not enough though, because you’d generate random values for the foreign keys, which might crash the app (if you enforce the foreign key constraints) or break some other invariants in your code. You might work around this by creating the objects in the right order and overriding the generated data for foreign key columns, but this would be a tedious and error-prone solution.

I have recently posted about my two side-projects: MicroOrm and Thneed. The former lets you annotate fields in a POJO and handles the conversion from POJO to ContentValues and from Cursor to POJO:

public class Customer {
  @Column("id")
  public long id;

  @Column("name")
  public String name;
}

public class Order {
  @Column("id")
  public long id;

  @Column("amount")
  public int amount;

  @Column("customer_id")
  public long customerId;
}

The latter allows you to define the relationships between entities in your data model:

ModelGraph<ModelInterface> modelGraph = ModelGraph.of(ModelInterface.class)
    .identifiedByDefault().by("id")
    .where()
    .the(ORDER).references(CUSTOMER).by("customer_id")
    .build();

See what I’m getting at?

The returned ModelGraph object is a data structure that can be processed by independently written processors, i.e. they are the Visitable and Visitor parts of the visitor design pattern. The entities in the relationship definitions are not plain marker Objects – the first builder call specifies the interface they have to implement. This interface can be used by Visitors to get useful information about the connected models and, as a type parameter of ModelGraph, ensures that you are using the correct Visitors for your ModelGraph. See my last post about Visitors for more information about generifying the visitor pattern.

In our case the interface should declare which POJO contains the MicroOrm annotations and where the generated ContentValues should be inserted:

public interface MicroOrmModel {
  public Class<?> getModelClass();
}

public interface ContentResolverModel {
  public Uri getUri();
}

interface ModelInterface extends ContentResolverModel, MicroOrmModel {
}

public static final ModelInterface CUSTOMER = new ModelInterface() {
  @Override
  public Uri getUri() {
    return Customers.CONTENT_URI;
  }

  @Override
  public Class<?> getModelClass() {
    return Customer.class;
  }
};

The final step is to wrap everything in fluent API:

Forger<ModelInterface> forger = new Forger<ModelInterface>(modelGraph, new MicroOrm());
Order order = forger.iNeed(Order.class).in(contentResolver);

// note: we didn't create the Customer dependency of Order, but:
assertThat(order.customerId).isNotEqualTo(0);

// of course we can create Customer first and then create Order for it:
Customer customer = forger.iNeed(Customer.class).in(contentResolver);
Order anotherOrder = forger.iNeed(Order.class).relatedTo(customer).in(contentResolver);

assertThat(anotherOrder.customerId).isEqualTo(customer.id);

// or if we need multiple orders for the same customer:
Customer anotherCustomer = forger.iNeed(Customer.class).in(contentResolver);
Forger<ModelInterface> forgerWithContext = forger.inContextOf(anotherCustomer);

Order orderA = forgerWithContext.iNeed(Order.class).in(contentResolver);
Order orderB = forgerWithContext.iNeed(Order.class).in(contentResolver);

assertThat(orderA.customerId).isEqualTo(anotherCustomer.id);
assertThat(orderB.customerId).isEqualTo(anotherCustomer.id);

The most pathological case in our code base was a test with 10 lines of code calling over 100 lines of helper methods, and 6 lines of actual test logic. The Forger library allowed us to get rid of all the helper methods and reduce the 10 lines of setup to one fluent API call (it’s quite a long call split into a few lines, but it’s much prettier than the original code).

Check out the code on GitHub and don’t forget to star the project if you find it interesting.

The funny thing about this project is that it’s a byproduct of Thneed, which I originally wrote to solve another problem. It makes me think that the whole idea of defining relationships as a visitable structure is more flexible than I originally anticipated, and it might become the cornerstone of a whole set of useful tools.

Random Musings on Visitor Design Pattern in Java


First, let’s have a quick refresher on what the visitor design pattern is. The pattern consists of two elements: the Visitor, which the Gang of Four book defines as an “operation to be performed on the elements of an object structure”, and the structure itself.

public interface Visitable {
  void accept(Visitor visitor);
}

public interface Visitor {
}

The Visitor interface is empty for now, because we haven’t declared any Visitable types. In every class implementing the Visitable interface we’ll call a different method on the Visitor:

public interface Visitable {
  void accept(Visitor visitor);
}

public static class VisitableFoo implements Visitable {
  @Override
  public void accept(Visitor visitor) {
    visitor.visit(this);
  }
}

public static class VisitableBar implements Visitable {
  @Override
  public void accept(Visitor visitor) {
    visitor.visit(this);
  }
}

public interface Visitor {
  void visit(VisitableBar visitableBar);
  void visit(VisitableFoo visitableFoo);
}

Sounds like a lot of work, but there is a reason for it. You could achieve something similar by simply adding another method to the Visitable interface, but that means you’d have to be able to modify the Visitable classes. The visit/accept double dispatch allows you to write a library like Thneed, which defines the data structure but leaves the implementation of operations to the library users.

The classic visitor pattern requires you to keep some state in the Visitor and to write a method for getting and clearing that state. This might be what you want, but if you simply want to process your Visitable objects one by one and return independent computations, you might prefer to just return a value from the visit() method. So the first twist you can add to the classic Visitor pattern is returning a value from the visit/accept methods:
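
To make the contrast concrete, here is a minimal sketch of the stateful variant (hypothetical names; the Visitable/VisitableFoo/VisitableBar shapes match the earlier snippet, nested in a demo class so it is self-contained):

```java
import java.util.ArrayList;
import java.util.List;

public class StatefulVisitorDemo {
  public interface Visitor {
    void visit(VisitableFoo visitableFoo);
    void visit(VisitableBar visitableBar);
  }

  public interface Visitable {
    void accept(Visitor visitor);
  }

  public static class VisitableFoo implements Visitable {
    @Override
    public void accept(Visitor visitor) {
      visitor.visit(this);
    }
  }

  public static class VisitableBar implements Visitable {
    @Override
    public void accept(Visitor visitor) {
      visitor.visit(this);
    }
  }

  // The mutable state and its getter are exactly the boilerplate the
  // value-returning variant below gets rid of.
  public static class NameCollectingVisitor implements Visitor {
    private final List<String> names = new ArrayList<>();

    @Override
    public void visit(VisitableFoo visitableFoo) {
      names.add("Foo");
    }

    @Override
    public void visit(VisitableBar visitableBar) {
      names.add("Bar");
    }

    public List<String> getNames() {
      return names;
    }
  }
}
```

The caller accepts the visitor on each element and then pulls the accumulated result out with getNames().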

public interface Visitable {
  <TReturn> TReturn accept(Visitor<TReturn> visitor);
}

public static class VisitableFoo implements Visitable {
  @Override
  public <TReturn> TReturn accept(Visitor<TReturn> visitor) {
    return visitor.visit(this);
  }
}

public static class VisitableBar implements Visitable {
  @Override
  public <TReturn> TReturn accept(Visitor<TReturn> visitor) {
    return visitor.visit(this);
  }
}

public interface Visitor<TReturn> {
  TReturn visit(VisitableBar visitableBar);
  TReturn visit(VisitableFoo visitableFoo);
}

Note that only the Visitor interface is parametrized with the return type. The only thing Visitable.accept() does is dispatch the call to the Visitor, so there is no point in generifying the whole interface – it’s sufficient to make the accept() method generic. In fact, making TReturn a type parameter of the Visitable interface would be a design mistake, because you wouldn’t be able to create a Visitable that could be accepted by Visitors with different return types:

public interface Visitable<TReturn> {
  TReturn accept(Visitor<TReturn> visitor);
}

Because of type erasure you wouldn’t be able to create a Visitable that can accept two Visitors returning different types:

public static class MyVisitable implements Visitable<String>, Visitable<Integer> {
  // Invalid! "Duplicate class Visitable" compilation error.
}

Another thing you can do is generify the whole pattern. The use case for this is when your Visitables are some kind of containers or wrappers over objects (again, see the Thneed library, where the Visitable subclasses are the different kinds of relationships between data models and are parametrized with the type representing the data models). The naive way to do this is just adding the type parameters:

public interface Visitable<T> {
  void accept(Visitor<T> visitor);
}

public static class VisitableFoo<T> implements Visitable<T> {
  @Override
  public void accept(Visitor<T> visitor) {
    visitor.visit(this);
  }
}

public static class VisitableBar<T> implements Visitable<T> {
  @Override
  public void accept(Visitor<T> visitor) {
    visitor.visit(this);
  }
}

public interface Visitor<T> {
  void visit(VisitableBar<T> visitableBar);
  void visit(VisitableFoo<T> visitableFoo);
}

There is a problem with the signatures of those interfaces. Let’s say that we want our Visitor to operate on Visitables containing Numbers:

Visitor<Number> visitor = new Visitor<Number>() {
  @Override
  public void visit(VisitableBar<Number> visitableBar) {
  }

  @Override
  public void visit(VisitableFoo<Number> visitableFoo) {
  }
};

You should think of the Visitor as the method accepting the Visitable. If our Visitor can handle something that contains a Number, it should also handle something that contains any Number subclass – it’s a classic example of “consumer extends, producer super” behaviour, or covariance and contravariance in general. In the implementation above, however, the strict generic types cause compilation errors. Generics wildcards to the rescue:

public interface Visitable<T> {
  void accept(Visitor<? super T> visitor);
}

public static class VisitableFoo<T> implements Visitable<T> {
  @Override
  public void accept(Visitor<? super T> visitor) {
    visitor.visit(this);
  }
}

public static class VisitableBar<T> implements Visitable<T> {
  @Override
  public void accept(Visitor<? super T> visitor) {
    visitor.visit(this);
  }
}

public interface Visitor<T> {
  void visit(VisitableBar<? extends T> visitableBar);
  void visit(VisitableFoo<? extends T> visitableFoo);
}

Note that the change has to be symmetric, i.e. both the accept() and visit() signatures have to include the bounds. Now we can safely call:

VisitableBar<Integer> visitableBar = new VisitableBar<Integer>();
Visitor<Number> visitor = new Visitor<Number>() {
  // visit() implementations
};

visitableBar.accept(visitor);

Proguard Gotcha


A while ago I wrote about removing the logs from release builds using Proguard. As usual, I’ve found a gotcha that might cost you a couple hours of head scratching.

Let’s say that we have a code like this somewhere:

package com.porcupineprogrammer.proguardgotcha;

import android.app.Activity;
import android.os.Bundle;
import android.util.Log;

public class MainActivity extends Activity {
  static final String TAG = "ProguardGotcha";

  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    Log.d(TAG, doNotRunOnProduction());
  }

  private String doNotRunOnProduction() {
    Log.e(TAG, "FIRE ZE MISSILES!");

    return "Harmless log message";
  }
}

The doNotRunOnProduction() method might perform some expensive database query, send some data over the network or launch intercontinental missiles – in any case, do something that you don’t want to happen in the production app. If you run the code in a debug build you’ll of course get the following logs:

08-20 19:31:34.183    1819-1819/com.porcupineprogrammer.proguardgotcha E/ProguardGotcha: FIRE ZE MISSILES!
08-20 19:31:34.183    1819-1819/com.porcupineprogrammer.proguardgotcha D/ProguardGotcha: Harmless log message

Now, let’s add Proguard config that removes all the Log.d() calls:

-assumenosideeffects class android.util.Log {
  public static *** d(...);
}

We might expect the Log.e() call to be gone as well, but alas, here is what we get:

08-20 19:34:45.733    2078-2078/com.porcupineprogrammer.proguardgotcha E/ProguardGotcha: FIRE ZE MISSILES!

The key to understanding what is happening here is the fact that Proguard does not operate on the source code, but on the compiled bytecode. In this case, what Proguard processes looks more like this:

@Override
protected void onCreate(Bundle savedInstanceState) {
  super.onCreate(savedInstanceState);

  String tmp = doNotRunOnProduction();
  Log.d(TAG, tmp);
}

One might argue that the temporary variable is not used and Proguard should remove it as well. That might actually happen if you add some other Proguard configuration settings, but the point of this blog post is that when you specify that you want to remove calls to Log.d(), you shouldn’t expect any other calls to be affected. They might be, but if your code really launches the missiles (or does something with a similar effect for your users), you don’t want to bet on it.

Introducing: Merry Cook


After a long hiatus since November 2011, I have released another clone of a classic Russian handheld game from the ‘80s – Merry Cook. I knew that the “Nu, Pogodi!” code wasn’t my top achievement, and I had to force myself to dive back into it, but I feel it was worth it. A few things I think I did right this time:

  • Do not keep any game logic in QML. Qt has an excellent state machine framework, which makes writing the game logic in C++ relatively easy.
  • Keep the QML/C++ interface as simple as possible. Send signals from QML to C++ when the user takes some action, and update the QML UI from the C++ side by changing QProperties on some context property object. I’ve actually used two objects for that, because it made testing a bit easier.
  • Unit tests. I’ve set up a testing harness using gmock/gtest and used it to unit test some things. I probably would have been fine without them, since Merry Cook is a very simple game, but a) it forced me to divide stuff into more manageable classes and b) it gave me a sense of accomplishing something early. It’s funny, because even though I’m absolutely conscious of the latter fact, I think it gave me enough boost to get to the point where I had moved forward with implementation and polishing, because I really wanted to publish this game.
  • QProperty helper. I wrote an abominable macro for reducing the QProperty boilerplate.

Things still on my TODO list:

  • More tests. Besides unit tests I’d also like to write some integration tests for the state machine setup and connections, but I didn’t have time to think about how this should be done without making too much state public just for testing. Maybe next time.
  • Refactor “Nu, Pogodi!”. I jumped straight into the new project, but I should have started with refactoring the old crap. On the other hand, it might have sucked all the motivation out of me, and had I done it, I wouldn’t have been writing this post right now. So, maybe next time.
  • Passing enums to QML. I have no idea what I did wrong, but I couldn’t get QML to see my C++ enums. I’ve resorted to passing them as plain ints and using magic numbers on the QML side, but it’s definitely something I should fix. Obviously not now, but next time.

Anyways, I’m really happy with the final results, especially with the gameplay experience, which I think mimics the original game very well. Try it yourself!

Gradle - First Impressions


Android Studio kept nagging me about the make implementation deprecation, so I decided to try the new build system based on Gradle. At first I obviously hit the missing Android Support Repository issue, but after installing the missing component in the Android SDK Manager everything was created correctly (AFAIK v0.2.3 of Android Studio doesn’t have this issue anymore). On Mac I also had to set the ANDROID_HOME env variable to be able to build from the command line.

The app templates are a bit outdated; for example, you can get rid of libs/android-support-v4.jar, because Gradle will use the jar from the aforementioned Android Support Repository anyway. The build.gradle also references an older support lib and build tools, so you should probably bump them to the latest versions.

Adding a dependency on a local jar is trivially easy – we need just one line in the dependencies section:

dependencies {
  compile files("libs/gson-2.2.4.jar")
}

You can also define a dependency on every jar in the libs directory:

dependencies {
  compile fileTree(dir: 'libs', include: '*.jar')
}

Using code annotation processors (like butterknife) is also trivial:

repositories {
  mavenCentral()
}

dependencies {
  compile 'com.jakewharton:butterknife:2.0.1'
}

The first of Gradle’s ugly warts is related to native libs support. You can add the directory with *.so files and the build will succeed, but you’ll get runtime errors when your app tries to call a native method. The workaround found on teh interwebs is to copy your native libs into the following directory structure:

lib
lib/mips/*.so
lib/other_architectures/*.so
lib/x86/*.so

NOTE: there is no typo, the top-level directory should be a singular “lib”. Then you have to zip the whole thing, rename it to *.jar and include it as a regular jar library. Lame, but it does the trick.

Let’s get back to the good parts. The list of tasks returned by the “gradlew tasks” command contains the installDebug task, but not installRelease. This happens because there is no default apk signing configuration for release builds. The simplest workaround is to use the same configuration as debug builds:

android {
  buildTypes {
    release {
      signingConfig signingConfigs.debug
    }
  }
}

But in a real project you should of course define a real signing configuration, along these lines:

android {
  signingConfigs {
    release {
      storeFile file("release.keystore")
      storePassword "XXX"
      keyAlias "XXX"
      keyPassword "XXX"
    }
  }

  buildTypes {
    release {
      signingConfig signingConfigs.release
    }
  }
}

The other useful setting that goes into the buildTypes section is the Proguard configuration. Proguard is disabled by default in Gradle builds, so we need to turn it on for release builds; we also need to specify the rules for Proguard to use:

android {
  buildTypes {
    release {
      runProguard true
      proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), file('proguard').listFiles()
      signingConfig signingConfigs.release
    }
  }
}

There are two nice things about this configuration: we can easily specify that we want to use the default rules defined in the Android SDK, and we can specify multiple additional files. In the configuration above I use all files from the ‘proguard’ directory, but you can define a simple list of files as well. This allows you to create reusable Proguard config files for commonly used libraries like ActionBarSherlock or google-gson. So far so good. Let’s declare a dependency on another project (a.k.a. module):

dependencies {
  compile project(':submoduleA')
}

Note that this is also declared in the app project’s build.gradle. It’s perfectly fine to include this kind of dependency in your app project, but I’m not happy with this solution for declaring dependencies between subprojects, because we’re introducing the subprojects’ dependencies into the main project’s structure.

// in build.gradle in main project

dependencies {
  compile project(':submoduleA')
  compile project(':submoduleB')
}

// in build.gradle of submoduleB, which depends on submoduleA

dependencies {
  compile project(':submoduleA')
}

It’s especially bad when those subprojects are reusable libraries which should be completely separate from your main project. The workaround I read about, but haven’t tested myself, is creating a local Maven repository and publishing the artifacts from subprojects. AFAIK you still have to publish the artifacts in the right order, so you still have to more or less manually manage your dependencies, which IMO defeats the purpose.

I feel I’m missing something elementary. The way I expect it to work is: define in each project what kind of artifacts are created, define the artifacts each project depends on, and let Gradle figure out the order of building the subprojects. Please drop me a line if what I just wrote doesn’t make any sense, if I expect too much from the build system, or if I missed some basic stuff.

Another thing that’s not so great is the long startup time. Even getting the list of available tasks for a simple project takes between 5 and 8 seconds on a 2012 MBP, every single time. I understand why this happens – build configs theoretically can check the weather forecast and use a different configuration on rainy days – and that this overhead is negligible compared to the actual build time, but every time I stare at this “Loading” prompt I think it should somehow be cached.

It’s time to wrap this blog post up. The main question I asked myself was: is it worth moving to Gradle? I’d say that if you have a manageable Maven build, then you shouldn’t bother (yet), but it’s a huge step forward compared to Ant builds.

MicroOrm and Thneed Available on Maven Central


I’ve uploaded my two experimental projects, MicroOrm and Thneed, to Maven Central. If you want to try them out, just add the following lines to your pom.xml:

<dependency>
  <groupId>org.chalup.thneed</groupId>
  <artifactId>thneed</artifactId>
  <version>0.3</version>
</dependency>

<dependency>
  <groupId>org.chalup.microorm</groupId>
  <artifactId>microorm</artifactId>
  <version>0.2</version>
</dependency>

Don’t hesitate to email me, create an issue on GitHub or propose a pull request. Any form of feedback is welcome!

Thneed Library


The MicroOrm library I started a while ago solves only a tiny part of the data model related problems – conversion between strongly typed objects and Android-specific storage classes. We discussed a few existing libraries for the data model implementation we might use at Base CRM, but we were not fully satisfied with any of them. There are two approaches to this problem.

The first approach is to define the Data Access Objects / entity objects and create the SQLite tables from this data. Almost every ORM solution for Android works this way. The deal breaker for those solutions is the complete disregard for data migrations. The ORMLite docs suggest that you should just fall back to raw queries, but this means you need to know the schema generated from the DAOs, which is a classic case of a leaky abstraction.

The completely opposite approach is used in the Mechanoid library. You define the database schema as a sequence of migrations and the library generates the DAOs and some other stuff. I was initially very excited about this project, but it’s in a very early stage of development and the commit activity is not very high. The main problem with this concept is extensibility and customization: for both you’d probably have to change the way the code is generated from the parsed SQLite schema. We also have some project-specific issues that would make it hard to use.

In the end we didn’t find an acceptable solution among the existing libraries and frameworks, but something good came out of our discussions. The sentence which came up again and again was “It wouldn’t be too hard to implement if we knew the relationships between our models”. Wait a minute, we do know these relationships! We just need a way to represent them in Java code!

And so, Thneed was born.

By itself Thneed doesn’t do anything useful – it just lets you state that one X has many Ys and so on, to create a relationship graph of your data models. The trick is, this graph is the Visitable part of the Visitor design pattern, which means you can write any number of Visitors to do something useful with the information about the declared relationships (see the project’s readme for some ideas). Think of it as a tool for creating other tools.

The project is in a very early stage, but I’ve already started another project on top of Thneed, and at this point the general idea seems sound. I’ve also learned a few tricks I’ll write about in a little while. As usual, feedback is welcome, and if you find the idea interesting, do not hesitate to star the project on GitHub.

Guava and minSdkVersion


A while ago I wrote about the pre-dexing feature introduced at the end of 2012, which facilitates using large Java libraries like Guava in Android app development. A few months later, I’m still discovering stuff in Guava that makes my life easier (BTW: I still owe you a blog post with a list of Guava goodies). But this week, for a change, I’ve managed to make my life harder with Guava.

I wanted to include the javadoc and source jars for Guava, and when I opened Maven Central I saw a new version and decided to upgrade from 13.0.1 to 14.0.1. Everything went smoothly, except for a minor Proguard hiccup: you have to include the jar with the @Inject annotation. At least it went smoothly on the first few phones I tested our app on, but on some ancient crap with Android 2.2 the app crashed with a NoClassDefFoundError.

The usual suspect in this case is, of course, Proguard. I also suspected an issue similar to the libphonenumber crash I wrote about in March. When both leads turned out to be dead ends, I decided to run the debug build, and to my surprise it crashed as well. And there was a logcat message which pinpointed the issue: ImmutableSet in Guava 14.0.1 somehow depends on the NavigableSet interface, which is available only from API level 9. Sad as I was, I downgraded Guava back to 13.0.1 and everything started to work again.

So what have I learned today?

  • Upgrading the libraries for the sake of upgrading is bad (m’kay).
  • Before you start wrestling with Proguard, test the debug build.
  • Android 2.2 doesn’t support all Java 1.6 classes.

The scary thing is, a similar thing might happen again if some other class in Guava depends on APIs unavailable on older versions of Android. Sounds like a good idea for a weekend hack project: some way to mark or check the minSdkVersion needed to use a given class or method.