SQLite, CouchDB and SPARQL with Scripting Layer for Android

The Scripting Layer for Android (SL4A) and the new SL4A Tablet Remix have a lot of powerful features, and can be used to consume data from a variety of sources, both online and offline. Support for some data sources, such as simple relational databases, is built in, but thanks to the ability to add extra code libraries to the environment we can also work with non-relational databases and even Linked Data. In this article I will quickly show you how to work with three different types of data source using Python in SL4A: a relational database in the form of a SQLite file, a non-relational database in the form of CouchDB, and Linked Data generated from Wikipedia which we will interrogate using the SPARQL language.

SQLite

The first option, a relational database, will probably be the most familiar to a lot of developers. SL4A supports SQLite out of the box. SQLite is a database system designed to be embedded in programs rather than run as a database server, so you still work with the data using SQL, but you don't get some of the security and sharing features of a server-based database like MySQL. However, SQLite is still very powerful and useful, and small enough to embed in our applications without taking up much space; it is often used in Android applications. Python on SL4A comes with the sqlite3 library, which lets you work with SQLite databases.

The script below creates a database, inserts some data into it and reads it back out again. Not very useful in itself, but you could use the same approach in your scripts to store input from users or the sensors, for example. One important aspect of SQLite is that it stores data locally in a file. This means you do not have to be online to use it; you could even copy a pre-populated file to your device and work with that. You can find more information about the sqlite3 library here: http://docs.python.org/library/sqlite3.html.

import sqlite3

# Connect to database file, create it if it does not exist
conn = sqlite3.connect('/sdcard/sqlitedemo.db')

# Get a cursor
cursor = conn.cursor()

# Create a table
cursor.execute("CREATE TABLE IF NOT EXISTS epl_teams (team text)")

# Insert some data
cursor.execute("INSERT INTO epl_teams VALUES ('Milton Keynes Lightning')")
cursor.execute("INSERT INTO epl_teams VALUES ('Slough Jets')")
conn.commit()

# Read the data back out
cursor.execute("SELECT * from epl_teams")
for row in cursor:
    print row

conn.close()
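If the values you store come from users or the sensors rather than string literals, it is safer to let sqlite3 quote them for you with placeholder parameters instead of building the SQL by hand. A minimal sketch of that pattern is below; it uses an in-memory database so it stands alone, but on the device you would pass the /sdcard file path exactly as in the script above.

```python
import sqlite3

# In-memory database so this sketch is self-contained; on the device
# you would use a file path such as '/sdcard/sqlitedemo.db' instead.
conn = sqlite3.connect(':memory:')
cursor = conn.cursor()
cursor.execute("CREATE TABLE IF NOT EXISTS epl_teams (team text)")

# The '?' placeholder makes sqlite3 escape each value for us, which
# matters as soon as the data is not a trusted literal.
teams = [('Milton Keynes Lightning',), ('Slough Jets',)]
cursor.executemany("INSERT INTO epl_teams VALUES (?)", teams)
conn.commit()

cursor.execute("SELECT count(*) FROM epl_teams")
count = cursor.fetchone()[0]
print(count)
conn.close()
```

The executemany call is just a convenience for running the same parameterised statement over a sequence of value tuples.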

CouchDB

Storing information in tables is not your only option. Have a look at my earlier tutorial, "Starting to relax with CouchDB", for a quick look at how this system works. CouchDB is a great way of storing data in a more document-orientated format, and as a bonus you can use both local and remote instances from your script. The easiest way to work with CouchDB locally with SL4A is to start by installing Mobile Futon from the Market.

When you have this installed, we need to alter the configuration slightly to make life a bit easier. Launch the app, tap Config and then httpd. Scroll down to port and enter "5984": this is the port that CouchDB instances are usually found on. Save the config. Now we need to install the library that lets Python in SL4A work with CouchDB. Go to http://pypi.python.org/pypi/CouchDB in a browser and download the Egg file for Python 2.6; at the time of writing this was CouchDB-0.8-py2.6.egg. Once downloaded, go into the Python for Android app, tap Import Modules, then tap the file you just downloaded and the new library should be installed.

Now we can work with some data! The example script below connects to our local CouchDB instance, creates a database (or connects to it if it is already there), puts some data in it and then prints it back out again. Note: if you get an error that the script cannot connect to the database, try coming out of SL4A, launching Mobile Futon again, then going back into SL4A. It is not an ideal solution, but hopefully it at least gives you the chance to experiment with CouchDB! For more information on using Python with CouchDB have a look at: http://packages.python.org/CouchDB/getting-started.html.

import couchdb

couch = couchdb.Server()

# Create db if it does not exist, connect to it if it does
try:
    db = couch.create('mytestdb')
    print "Created database"
except couchdb.PreconditionFailed:
    print "Database already exists"
    db = couch['mytestdb']

# Create a document
doc = {"team": "Milton Keynes Lightning"}
# Save the document - will assign an id automatically as we did not state one
db.save(doc)
# And again...
doc = {"team": "Peterborough Phantoms"}
db.save(doc)

# Print out the contents of the database
for id in db:
    print db[id]

SPARQL

The last destination on our quick tour of using data sources with SL4A is Linked Data, specifically interrogating Linked Data sources using the SPARQL language. Lots of information can be obtained using SPARQL without the need for painful screen-scraping code. I have been using sparql-client, which was written by the European Environment Information and Observation Network. This library is quite straightforward to use and written in pure Python, so it is perfect for SL4A.

Before we can start we need to install the library. Unfortunately the procedure is not quite as simple as for the CouchDB library, as the download is in a different format. Download the zip file from http://pypi.python.org/pypi/sparql-client/. You will also need an app that can unzip downloads; I used File Station but there are a few others. Unzip the archive and copy the file sparql.py from inside it to /sdcard/com.googlecode.pythonforandroid/extras/python. Now the library is in the right place we can use it in our scripts.

Some of you may have noticed a bit of an ice hockey theme in my example scripts. Let's keep that theme going with the example below, which runs a SPARQL query on DBpedia to find the URLs of Wikipedia pages about people who have played for my favourite ice hockey team: Milton Keynes Lightning. DBpedia is a wonderful resource which exposes information from Wikipedia in Linked Data form, typically the information from the information box in the top right of an article. To see the information on DBpedia for a topic you can just rewrite the start of the Wikipedia URL; for example, try changing http://en.wikipedia.org/wiki/Milton_Keynes_Lightning to http://dbpedia.org/page/Milton_Keynes_Lightning. You can run SPARQL queries on DBpedia using their SPARQL endpoint: http://dbpedia.org/sparql.
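That URL rewriting is mechanical enough to do in a line of Python. The little helper below (the function name is my own, not part of any library) just swaps the English Wikipedia article prefix for the DBpedia page prefix:

```python
def dbpedia_url(wikipedia_url):
    # Swap the English Wikipedia article prefix for the DBpedia page prefix.
    return wikipedia_url.replace('http://en.wikipedia.org/wiki/',
                                 'http://dbpedia.org/page/')

print(dbpedia_url('http://en.wikipedia.org/wiki/Milton_Keynes_Lightning'))
```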


import sparql

q = ("""
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX dbpedia-owl: <http://dbpedia.org/ontology/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX dbpedia: <http://dbpedia.org/resource/>
SELECT DISTINCT ?name ?page
WHERE {
    ?player foaf:name ?name .
    ?player foaf:page ?page .
    ?player dbpedia-owl:team dbpedia:Milton_Keynes_Lightning .
    ?player rdf:type dbpedia-owl:IceHockeyPlayer .
}
""")
result = sparql.query('http://dbpedia.org/sparql', q)

for row in result:
    values = sparql.unpack_row(row)
    print "Name: ", values[0].encode('utf-8')
    print "Wikipedia page: ", values[1]

This was only a very quick tour of using each of these three technologies with SL4A; it would be perfectly possible to write a whole book on each! Hopefully it shows that SL4A can work with a wide variety of data sources, a fact that makes it an even more useful tool.

Photo: Working at Maitland railway station by State Records NSW
