Disable SSLv3 in your Tomcat Connector

With last week’s announcement of the POODLE vulnerability in SSLv3, I have been testing a new HTTPS Connector configuration for Tomcat.

Most of the documentation I have found assumes that you are using the Native/APR connector. However, LabKey’s standard Tomcat configuration does not use the Native/APR connector; it uses the NIO connector. After a bit of testing, I found that the following configuration disables SSLv3 (and SSLv2) when using the BIO/NIO connectors:

<Connector port="443" scheme="https" secure="true"
    SSLEnabled="true" sslEnabledProtocols="TLSv1,TLSv1.1,TLSv1.2" sslProtocol="TLSv1"
    ... />

During my testing, I found that if you do not limit the available protocols with sslEnabledProtocols="TLSv1,TLSv1.1,TLSv1.2", then setting sslProtocol="TLSv1" by itself still leaves SSLv3 enabled.
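
A quick way to confirm the change is to attempt a handshake with each protocol and make sure only the TLS versions succeed. The following is a rough sketch using Python’s ssl module, not part of the original configuration: the hostname is a placeholder, and it assumes a Python build that still includes SSLv3 support (common at the time; modern builds usually omit it, which the getattr check accounts for).

import socket
import ssl

HOST = 'labkey.example.com'   # placeholder: your server's hostname
PORT = 443

# After the connector change, the SSLv3 handshake should be rejected
# while the TLSv1 handshake still succeeds.
protocols = [('SSLv3', getattr(ssl, 'PROTOCOL_SSLv3', None)),
             ('TLSv1', getattr(ssl, 'PROTOCOL_TLSv1', None))]

for name, proto in protocols:
    if proto is None:
        print('{0}: not supported by this Python build'.format(name))
        continue
    context = ssl.SSLContext(proto)
    sock = socket.create_connection((HOST, PORT), timeout=10)
    try:
        context.wrap_socket(sock, server_hostname=HOST)
        print('{0}: handshake succeeded'.format(name))
    except ssl.SSLError:
        print('{0}: handshake failed (protocol disabled)'.format(name))
    finally:
        sock.close()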

If you are interested in seeing the rest of the HTTPS connector that LabKey uses, an example of our server.xml config file is available at


Updating the bconn/labkey-standalone Docker image for ShellShock

I needed to patch my LabKey Server Docker image hosted on Docker Hub for the ShellShock bug. To do this I ran the following:

Create a container using the image I want to update. This command starts a new container from the Docker image and then connects to it for an interactive session (i.e., a shell prompt).

docker run -t -i bconn/labkey-standalone /bin/bash

Now we can install the bash update

apt-get update
apt-get install --only-upgrade bash
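
Before committing, I could confirm the patch took effect by running the well-known ShellShock test inside the container: a vulnerable bash executes the command smuggled in through an environment variable, while a patched bash does not. Here is a small Python wrapper around that test (a sketch only, assuming Python is available in the container; the equivalent bash one-liner works just as well).

import subprocess

# A vulnerable bash prints "vulnerable" before "this is a test";
# a patched bash prints only "this is a test".
output = subprocess.check_output(
    ['/bin/bash', '-c', 'echo this is a test'],
    env={'x': '() { :;}; echo vulnerable'})
print(output.decode())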

Commit the changes in the container to the image

docker commit -m="Installed Bash update for ShellShock" -a="bconn" d22ec3cf6c8c bconn/labkey-standalone

Push the updated image to Docker Hub

docker push bconn/labkey-standalone

IMPORTANT: If you build a new image from the Dockerfile, it will be patched automatically during the build process, as the Dockerfile contains the line FROM ubuntu:14.04, and the Ubuntu 14.04 base image has been patched for ShellShock.

Update 10/10/2014: After a little more research, it looks like there is an additional (and probably better) way to update an image hosted on Docker Hub: build a clean, new image from the Dockerfile.

For example, using a clean Docker session (i.e., no existing containers or images), I would have followed the instructions in the Usage section at

Then I would have pushed the newly created image to Docker Hub by running

 docker push bconn/labkey-standalone


Move a static website to S3

In August, I decided it was time to upgrade this website. I wanted to accomplish the following with the upgrade:

  1. Improve the readability both in the browser and on mobile devices (phones, tablets, etc.).
  2. Move the website from a cloud server at Rackspace to S3.
    • This is a static website that is generated with Jekyll.

To improve the readability, I upgraded to the latest version of Bootstrap and followed the guidelines in Butterick’s Practical Typography.

The migration of the site to be hosted as an S3 Static Website took a little more planning and a bit of trial and error.

The rest of the entry will cover how this site was migrated to S3.

Create the S3 bucket to hold your site

I followed these instructions; however, I made a few changes because my DNS is provided by Hover.

I created two buckets with the following configuration:

    • The website bucket
      • Enabled website hosting
        • Index Document - index.html
        • Error Document - error.html
      • Enabled access logging for the bucket
        • Access logs will be placed in the ./logs folder in the bucket
    • The redirect bucket
      • Redirect all requests to another host name
        • Redirect requests to
      • Enabled access logging for the bucket
        • Access logs will be placed in the folder in the bucket
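
The same bucket settings can also be applied from a script. The snippet below is only a rough sketch using the boto library (version 2, current at the time); the bucket name and log prefix are placeholders, and I actually set these options through the S3 console. The redirect bucket can be configured in a similar way.

import boto

# Credentials come from the environment or a ~/.boto config file.
conn = boto.connect_s3()

# Placeholder name; use your own website bucket.
site_bucket = conn.get_bucket('example-site-bucket')

# Serve the bucket as a static website with index.html and error.html.
site_bucket.configure_website(suffix='index.html', error_key='error.html')

# Allow the bucket to receive its own access logs, written under logs/.
site_bucket.set_as_logging_target()
site_bucket.enable_logging(site_bucket, target_prefix='logs/')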

Upload your site to the S3 bucket

For this first upload, I used the AWS S3 Console to upload the files. If you use Chrome as your browser, you can drag and drop the entire directory tree into the upload wizard.

Do not hit the Start Upload button yet. Instead:

  1. Click on the Set Details button
  2. Select the Reduced Redundancy Storage option
  3. Click on the Set Permissions button
  4. Select Make everything public
  5. Click on the Start Upload button to start the upload of the files
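
For later uploads (for example, after regenerating the site with Jekyll), the same settings can be applied from a script instead of the console. This is only a rough sketch using the boto library; the bucket name and local directory are placeholders, and it is not the exact tooling I used for this migration.

import mimetypes
import os

import boto

SITE_DIR = '_site'                     # local Jekyll output (placeholder)
BUCKET_NAME = 'example-site-bucket'    # placeholder bucket name

conn = boto.connect_s3()
bucket = conn.get_bucket(BUCKET_NAME)

for root, _dirs, files in os.walk(SITE_DIR):
    for name in files:
        path = os.path.join(root, name)
        key_name = os.path.relpath(path, SITE_DIR)
        content_type = mimetypes.guess_type(path)[0] or 'application/octet-stream'
        # Mirror the console settings: public read access and
        # Reduced Redundancy storage.
        key = bucket.new_key(key_name)
        key.set_contents_from_filename(
            path,
            headers={'Content-Type': content_type},
            policy='public-read',
            reduced_redundancy=True)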

At this point, I was able to access the site at

If I went to I would automatically get redirected to

Update the DNS entries for the fourproc domain

DNS for my domain is provided by Hover. The following changes were made to the DNS entries:

  • @
  • www
    • Created a CNAME record pointing to

Configure naked domain redirect

To configure the naked domain redirect, I followed the instructions at

  • Forwarded the domain to
  • Did not select Enable stealth redirection

That is it. Now you can go to and see this site.


Giving Docker a Try

In the past few weeks there has been a lot of buzz about Docker and Linux containers, including big announcements of Docker support and management tools by

In my futile effort to stay relevant in this ever-changing world, I gave Docker a try. This weekend I spent some time experimenting and eventually built a Docker image that runs LabKey Server.

You can download the image from Docker Hub. It is named bconn/labkey-standalone.

The Dockerfile, scripts and instructions for building your own image are checked into LabKey’s public samples repository.

Give it a try.


Using Python to Manage Data in a LabKey Server

At LabKey, I regularly use Python to create tools for managing our servers. I often use the LabKey Server Python API to interact with the LabKey Servers we run. However, in a few instances I have had to write code that interacts with a LabKey Server in ways not yet supported by the Python API.

Eventually this code might be included in the Python API, but until that happens I decided to share a few examples of what can be done.

  • This script shows you how to upload a file from your workstation to the FileRoot of a Folder on your LabKey Server (a rough sketch of the core upload request appears after this list)
  • How do I use this functionality?
    • LabKey uses AWS Spot Instances when running the servers used to test LabKey Server. These Spot Instances are started using a Python script which tracks each request in a log file. To make cost accounting easier, each start/stop of a Spot Instance is tracked in a List. When the script finishes, the log file is uploaded to the FileRoot of the Folder containing the List and the URL of the uploaded file is included in the List record.
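
The heart of that upload is a single WebDAV PUT into the Folder’s file root. The sketch below uses the requests library; the server URL, folder path, credentials, and file name are placeholders (and depending on your install the URL may also need a /labkey context path), while the real script in the repository handles authentication and errors more carefully.

import requests

# All of these values are placeholders.
BASE_URL = 'https://labkey.example.com'
FOLDER_PATH = 'MyProject/MyFolder'
LOCAL_FILE = 'spot-instance.log'

# LabKey exposes each Folder's file root over WebDAV under /_webdav/<folder>/@files/.
url = '{0}/_webdav/{1}/@files/{2}'.format(BASE_URL, FOLDER_PATH, LOCAL_FILE)

with open(LOCAL_FILE, 'rb') as f:
    response = requests.put(url, data=f, auth=('user@example.com', 'password'))

response.raise_for_status()
print('Uploaded to', url)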

  • This script shows you how to create a Study Archive and download the resulting Archive (in the form of a ZIP file) to your workstation.
  • How do I use this functionality?
    • I use a similar script to periodically move Study data from one LabKey Server to a second LabKey Server.

These scripts are available in LabKey’s Samples repository on GitHub. This repository is used by LabKey to share tools and scripts with the LabKey Server community. It currently contains scripts for installing and upgrading a LabKey Server, along with the Python scripts described above. Keep an eye on it for additional scripts that are coming soon.